Mar 14 08:56:53 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 14 08:56:53 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:53 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:56:54 crc restorecon[4686]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 
08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54
crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 
08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:56:54 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 
crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc 
restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:56:54 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 14 08:56:55 crc kubenswrapper[4687]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.475974 4687 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478688 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478706 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478711 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478715 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478719 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478723 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478727 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478731 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478734 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 
08:56:55.478738 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478742 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478745 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478749 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478754 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478758 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478763 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478775 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478779 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478783 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478787 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478790 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478794 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478798 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478802 4687 feature_gate.go:330] 
unrecognized feature gate: RouteAdvertisements Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478805 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478810 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478814 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478818 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478821 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478825 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478830 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478835 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478839 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478843 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478846 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478850 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478854 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478857 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478862 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478866 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478870 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478874 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478877 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478881 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478885 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478889 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478892 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478896 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478899 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478904 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478907 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478912 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478916 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478919 4687 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478923 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478926 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478929 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478933 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478936 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478940 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478944 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478947 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478950 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478954 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478957 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478961 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478964 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478967 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 08:56:55 crc 
kubenswrapper[4687]: W0314 08:56:55.478971 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478974 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.478978 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481188 4687 flags.go:64] FLAG: --address="0.0.0.0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481203 4687 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481211 4687 flags.go:64] FLAG: --anonymous-auth="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481216 4687 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481222 4687 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481227 4687 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481232 4687 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481238 4687 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481242 4687 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481247 4687 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481251 4687 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481256 4687 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481260 
4687 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481264 4687 flags.go:64] FLAG: --cgroup-root="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481268 4687 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481272 4687 flags.go:64] FLAG: --client-ca-file="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481276 4687 flags.go:64] FLAG: --cloud-config="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481281 4687 flags.go:64] FLAG: --cloud-provider="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481286 4687 flags.go:64] FLAG: --cluster-dns="[]" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481292 4687 flags.go:64] FLAG: --cluster-domain="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481296 4687 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481300 4687 flags.go:64] FLAG: --config-dir="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481304 4687 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481309 4687 flags.go:64] FLAG: --container-log-max-files="5" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481314 4687 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481319 4687 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481323 4687 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481328 4687 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481348 4687 flags.go:64] FLAG: --contention-profiling="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481353 4687 
flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481357 4687 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481363 4687 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481367 4687 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481373 4687 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481377 4687 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481381 4687 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481385 4687 flags.go:64] FLAG: --enable-load-reader="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481390 4687 flags.go:64] FLAG: --enable-server="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481394 4687 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481400 4687 flags.go:64] FLAG: --event-burst="100" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481404 4687 flags.go:64] FLAG: --event-qps="50" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481408 4687 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481412 4687 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481417 4687 flags.go:64] FLAG: --eviction-hard="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481422 4687 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481426 4687 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481431 4687 
flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481435 4687 flags.go:64] FLAG: --eviction-soft="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481439 4687 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481443 4687 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481448 4687 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481452 4687 flags.go:64] FLAG: --experimental-mounter-path="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481456 4687 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481461 4687 flags.go:64] FLAG: --fail-swap-on="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481465 4687 flags.go:64] FLAG: --feature-gates="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481471 4687 flags.go:64] FLAG: --file-check-frequency="20s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481475 4687 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481480 4687 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481484 4687 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481530 4687 flags.go:64] FLAG: --healthz-port="10248" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481535 4687 flags.go:64] FLAG: --help="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481539 4687 flags.go:64] FLAG: --hostname-override="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481543 4687 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481548 4687 flags.go:64] 
FLAG: --http-check-frequency="20s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481552 4687 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481557 4687 flags.go:64] FLAG: --image-credential-provider-config="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481561 4687 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481565 4687 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481569 4687 flags.go:64] FLAG: --image-service-endpoint="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481573 4687 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481577 4687 flags.go:64] FLAG: --kube-api-burst="100" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481581 4687 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481585 4687 flags.go:64] FLAG: --kube-api-qps="50" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481589 4687 flags.go:64] FLAG: --kube-reserved="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481594 4687 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481597 4687 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481602 4687 flags.go:64] FLAG: --kubelet-cgroups="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481606 4687 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481610 4687 flags.go:64] FLAG: --lock-file="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481614 4687 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481618 4687 
flags.go:64] FLAG: --log-flush-frequency="5s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481622 4687 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481630 4687 flags.go:64] FLAG: --log-json-split-stream="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481634 4687 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481639 4687 flags.go:64] FLAG: --log-text-split-stream="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481643 4687 flags.go:64] FLAG: --logging-format="text" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481647 4687 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481651 4687 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481656 4687 flags.go:64] FLAG: --manifest-url="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481660 4687 flags.go:64] FLAG: --manifest-url-header="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481666 4687 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481671 4687 flags.go:64] FLAG: --max-open-files="1000000" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481676 4687 flags.go:64] FLAG: --max-pods="110" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481681 4687 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481685 4687 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481689 4687 flags.go:64] FLAG: --memory-manager-policy="None" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481693 4687 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 
08:56:55.481697 4687 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481701 4687 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481706 4687 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481717 4687 flags.go:64] FLAG: --node-status-max-images="50" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481721 4687 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481725 4687 flags.go:64] FLAG: --oom-score-adj="-999" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481729 4687 flags.go:64] FLAG: --pod-cidr="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481733 4687 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481739 4687 flags.go:64] FLAG: --pod-manifest-path="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481743 4687 flags.go:64] FLAG: --pod-max-pids="-1" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481747 4687 flags.go:64] FLAG: --pods-per-core="0" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481751 4687 flags.go:64] FLAG: --port="10250" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481756 4687 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481760 4687 flags.go:64] FLAG: --provider-id="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481763 4687 flags.go:64] FLAG: --qos-reserved="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481768 4687 flags.go:64] FLAG: --read-only-port="10255" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 
08:56:55.481773 4687 flags.go:64] FLAG: --register-node="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481778 4687 flags.go:64] FLAG: --register-schedulable="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481784 4687 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481801 4687 flags.go:64] FLAG: --registry-burst="10" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481806 4687 flags.go:64] FLAG: --registry-qps="5" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481812 4687 flags.go:64] FLAG: --reserved-cpus="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481817 4687 flags.go:64] FLAG: --reserved-memory="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481825 4687 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481831 4687 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481837 4687 flags.go:64] FLAG: --rotate-certificates="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481842 4687 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481848 4687 flags.go:64] FLAG: --runonce="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481855 4687 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481861 4687 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481867 4687 flags.go:64] FLAG: --seccomp-default="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481872 4687 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481877 4687 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: 
I0314 08:56:55.481882 4687 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481886 4687 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481891 4687 flags.go:64] FLAG: --storage-driver-password="root" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481895 4687 flags.go:64] FLAG: --storage-driver-secure="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481899 4687 flags.go:64] FLAG: --storage-driver-table="stats" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481903 4687 flags.go:64] FLAG: --storage-driver-user="root" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481907 4687 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481912 4687 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481916 4687 flags.go:64] FLAG: --system-cgroups="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481920 4687 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481927 4687 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481931 4687 flags.go:64] FLAG: --tls-cert-file="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481935 4687 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481940 4687 flags.go:64] FLAG: --tls-min-version="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481945 4687 flags.go:64] FLAG: --tls-private-key-file="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481949 4687 flags.go:64] FLAG: --topology-manager-policy="none" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481953 4687 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 14 08:56:55 crc 
kubenswrapper[4687]: I0314 08:56:55.481957 4687 flags.go:64] FLAG: --topology-manager-scope="container" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481962 4687 flags.go:64] FLAG: --v="2" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481968 4687 flags.go:64] FLAG: --version="false" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481974 4687 flags.go:64] FLAG: --vmodule="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481979 4687 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.481984 4687 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482078 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482083 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482088 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482092 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482097 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482100 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482104 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482108 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482112 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 
08:56:55.482116 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482120 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482123 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482127 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482130 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482134 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482137 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482141 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482144 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482148 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482151 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482155 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482158 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482161 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482165 4687 feature_gate.go:330] 
unrecognized feature gate: MachineConfigNodes Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482168 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482172 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482176 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482181 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482186 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482189 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482193 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482197 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482201 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482205 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482209 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482213 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482216 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482219 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy 
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482223 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482227 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482230 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482234 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482237 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482241 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482245 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482248 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482252 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482255 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482259 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482262 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482265 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482269 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482273 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482276 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482282 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482286 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482290 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482294 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482298 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482302 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482306 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482310 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482314 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482318 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482321 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482325 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482328 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482347 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482351 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482356 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.482361 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.483102 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.495141 4687 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.495192 4687 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495321 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495369 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495382 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495394 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495405 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495416 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495427 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495436 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495444 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495453 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495461 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495469 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495477 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495488 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495501 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495511 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495521 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495529 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495540 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495550 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495561 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495621 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495631 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495639 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495649 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495659 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495667 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495675 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495684 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495691 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495699 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495707 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495715 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495723 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495730 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495739 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495747 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495755 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495762 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495771 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495778 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495786 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495794 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495802 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495809 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495817 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495826 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495834 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495842 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495850 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495857 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495865 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495873 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495881 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495892 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495901 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495910 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495919 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495927 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495935 4687 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495943 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495952 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495960 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495968 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495977 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495986 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.495994 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496003 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496012 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496020 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496028 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.496041 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496267 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496282 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496293 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496303 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496311 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496319 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496328 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496434 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496443 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496451 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496459 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496466 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496474 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496483 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496491 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496498 4687 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496506 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496514 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496521 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496530 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496660 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496673 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496684 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496694 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496703 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496714 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496723 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496730 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496738 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496746 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496754 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496761 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496769 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496776 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496785 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496793 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496800 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496811 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496819 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496830 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496839 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496846 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496854 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496862 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496870 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496878 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496886 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496894 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496902 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496909 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496918 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496926 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496933 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496941 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496948 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496957 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496964 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496973 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496980 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496988 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.496996 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497004 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497012 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497022 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497031 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497039 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497048 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497056 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497064 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497072 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.497080 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.497092 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.498113 4687 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.502798 4687 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.507371 4687 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.507511 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.509134 4687 server.go:997] "Starting client certificate rotation"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.509167 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.509375 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.533026 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.535978 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.536130 4687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.554372 4687 log.go:25] "Validated CRI v1 runtime API"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.589667 4687 log.go:25] "Validated CRI v1 image API"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.591997 4687 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.603040 4687 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-14-08-51-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.603084 4687 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.633979 4687 manager.go:217] Machine: {Timestamp:2026-03-14 08:56:55.630141991 +0000 UTC m=+0.618382406 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9c5f1646-8f12-408a-97a5-53cd4c1286c6 BootID:082ff9a8-763f-4c35-a8f4-a146ab033d00 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2f:1b:43 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2f:1b:43 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c0:b0:00 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bd:78:bd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:48:61:b4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e5:ed:ba Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:ef:30:1a:2b:b8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:5a:0e:6c:da:a8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.634384 4687 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.634595 4687 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.635029 4687 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.635320 4687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.635407 4687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.635765 4687 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.635783 4687 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.636306 4687 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.636376 4687 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.637433 4687 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.637578 4687 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.642814 4687 kubelet.go:418] "Attempting to sync node with API server"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.642859 4687 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.642909 4687 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.642930 4687 kubelet.go:324] "Adding apiserver pod source"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.642953 4687 apiserver.go:42]
"Waiting for node sync before watching apiserver pods" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.650080 4687 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.650859 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.650854 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.650989 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.650979 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.652694 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.654824 4687 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669870 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669915 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669924 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669930 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669941 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669949 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669955 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669966 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669975 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.669983 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.670004 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.670011 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.672862 4687 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.673398 4687 server.go:1280] "Started kubelet" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.673608 4687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.674193 4687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 08:56:55 crc systemd[1]: Started Kubernetes Kubelet. Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.675473 4687 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.676126 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.676294 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.676327 4687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.676645 4687 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.676673 4687 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.676648 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.677305 4687 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.677789 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.677913 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.679461 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.683637 4687 server.go:460] "Adding debug handlers to kubelet server" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.683919 4687 factory.go:55] Registering systemd factory Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.683935 4687 factory.go:221] Registration of the systemd container factory successfully Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.685367 4687 factory.go:153] Registering CRI-O factory Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.685388 4687 factory.go:221] Registration of the crio container factory successfully Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.685456 4687 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.685481 4687 factory.go:103] Registering Raw factory Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 
08:56:55.685500 4687 manager.go:1196] Started watching for new ooms in manager Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.686472 4687 manager.go:319] Starting recovery of all containers Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.685641 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694278 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694344 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694356 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694365 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694374 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694384 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694394 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694404 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694416 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694425 4687 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694434 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694442 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694450 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694462 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694471 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694482 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694491 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694500 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694510 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694518 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694528 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694537 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694568 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694579 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694611 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694620 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694631 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694641 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" 
seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694651 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694660 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694672 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694683 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694693 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694703 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 
08:56:55.694713 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694723 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694734 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694743 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694774 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694785 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694794 4687 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694803 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694813 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694823 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694833 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694842 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694851 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694860 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694870 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694879 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694888 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694898 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694911 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694920 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694930 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694940 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694949 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694957 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694967 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" 
seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.694997 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695006 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695015 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695048 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695059 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695071 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695108 4687 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695117 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695127 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695135 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695146 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695157 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695168 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695178 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695188 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695197 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695206 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695215 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695225 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695233 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695243 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695251 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695260 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695269 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695280 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695291 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695302 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695314 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695343 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695356 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695369 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695382 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695395 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695406 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695416 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695428 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695439 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695449 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695459 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695487 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695497 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695506 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695515 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695524 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695532 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695546 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695555 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695689 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695699 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695709 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695719 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695728 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695739 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695749 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695758 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695767 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695776 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695785 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695795 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695805 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695813 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 
08:56:55.695823 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695834 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695843 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695851 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695862 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695870 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695880 4687 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695890 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695901 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695935 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695945 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695954 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695964 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695973 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695983 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.695993 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.696003 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.696014 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.696024 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.696034 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698497 4687 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698532 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698549 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698568 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698581 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698592 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698604 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698616 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698629 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698672 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698682 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698716 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698727 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698738 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698748 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698759 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698771 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698783 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698793 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698808 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698818 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698829 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698840 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" 
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698852 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698862 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698874 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698887 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698899 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698911 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698922 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698934 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698947 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698982 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.698993 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699004 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699018 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699031 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699043 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699062 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699075 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.699088 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700136 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700150 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700164 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700178 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700192 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700206 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700219 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700235 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700279 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700294 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700308 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700349 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700363 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700376 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700389 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700401 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700415 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700427 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700441 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700453 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700466 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700486 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700499 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700511 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700525 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700539 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700554 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700566 4687 reconstruct.go:97] "Volume reconstruction finished"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.700575 4687 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.702895 4687 manager.go:324] Recovery completed
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.711146 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.712396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.712438 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.712448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.713540 4687 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.713557 4687 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.713576 4687 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.729481 4687 policy_none.go:49] "None policy: Start"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.730988 4687 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.731021 4687 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.733667 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.735550 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.735592 4687 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.735620 4687 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.735668 4687 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 08:56:55 crc kubenswrapper[4687]: W0314 08:56:55.737817 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.737950 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.777170 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.786521 4687 manager.go:334] "Starting Device Plugin manager"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.786650 4687 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.786667 4687 server.go:79] "Starting device plugin registration server"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.787076 4687 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.787096 4687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.787237 4687 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.787373 4687 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.787388 4687 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.792874 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.836545 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.836692 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.837782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.837824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.837833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.837978 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.838246 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.838353 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.838850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.838880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.838905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.839023 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.839145 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.839180 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.840891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.840949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.840964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841452 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841507 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841666 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.841819 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.842930 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.842956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.842969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843120 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843161 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843302 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843356 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843915 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.843963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.844087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.844113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.844124 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.844222 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.844266 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.845539 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.845565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.845573 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.881246 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.887195 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.888076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.888103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.888113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.888135 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: E0314 08:56:55.888505 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903504 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903541 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903557 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903838 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903891 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903904 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:55 crc kubenswrapper[4687]: I0314 08:56:55.903935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004375 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004434 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004484 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004508 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004547 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004614 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004665 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004691 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") "
pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004735 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004767 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004784 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004806 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004889 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.004943 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.005019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.088766 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.089990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.090063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.090079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.090163 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.090813 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.174934 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.192456 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.207957 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.675889 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.675914 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.676193 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.676207 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.676412 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.676574 4687 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.676588 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.678724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.678774 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.678794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:56 crc kubenswrapper[4687]: I0314 08:56:56.678836 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.679449 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.699317 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.699458 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.736633 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fec3f9e252ebee54ea09bd1a939990a6d8655bedec12391d7677314d3d9171d1 WatchSource:0}: Error finding container fec3f9e252ebee54ea09bd1a939990a6d8655bedec12391d7677314d3d9171d1: Status 404 returned error can't find the container with id fec3f9e252ebee54ea09bd1a939990a6d8655bedec12391d7677314d3d9171d1 Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.737475 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d83e80a30db86d7e1f52074666aaaebdb953cfacdf0f751cb3913e8f02772782 WatchSource:0}: Error finding container d83e80a30db86d7e1f52074666aaaebdb953cfacdf0f751cb3913e8f02772782: Status 404 returned error can't find the container with id d83e80a30db86d7e1f52074666aaaebdb953cfacdf0f751cb3913e8f02772782 Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.739484 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ec97843d882b182a8148ed0f7dde04a782d65c686d740c459a584cd7d21ae83d WatchSource:0}: Error finding container ec97843d882b182a8148ed0f7dde04a782d65c686d740c459a584cd7d21ae83d: Status 404 returned error can't find the container with id ec97843d882b182a8148ed0f7dde04a782d65c686d740c459a584cd7d21ae83d Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.740625 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-211594d76097dca76bc45a5cb4380cecb18dcd96395e53b60b0ecd2e1d4b02df WatchSource:0}: Error finding container 211594d76097dca76bc45a5cb4380cecb18dcd96395e53b60b0ecd2e1d4b02df: Status 404 returned error can't find the container with id 211594d76097dca76bc45a5cb4380cecb18dcd96395e53b60b0ecd2e1d4b02df Mar 14 08:56:56 crc kubenswrapper[4687]: W0314 08:56:56.844297 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:56 crc kubenswrapper[4687]: E0314 08:56:56.844417 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:57 crc kubenswrapper[4687]: W0314 08:56:57.265560 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:57 crc kubenswrapper[4687]: E0314 08:56:57.265633 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:57 crc kubenswrapper[4687]: E0314 08:56:57.477653 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.479572 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.481046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.481096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.481106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.481128 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:56:57 crc kubenswrapper[4687]: E0314 08:56:57.481651 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.677233 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.723619 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:56:57 crc kubenswrapper[4687]: E0314 08:56:57.724984 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate 
signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.742207 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784" exitCode=0 Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.742275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.742572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec97843d882b182a8148ed0f7dde04a782d65c686d740c459a584cd7d21ae83d"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.742667 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.743305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.743350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.743358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.744727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.744759 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.744772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d80df41ee43fbd3e54eed944a11d2f5886a76474248e8913e74009d3c090860c"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.745048 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.745849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.745916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.745941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.746876 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306" exitCode=0 Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.746939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.746959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d83e80a30db86d7e1f52074666aaaebdb953cfacdf0f751cb3913e8f02772782"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.747051 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.747575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.747591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.747603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.748282 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20" exitCode=0 Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.748322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.748395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"211594d76097dca76bc45a5cb4380cecb18dcd96395e53b60b0ecd2e1d4b02df"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.748504 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749368 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749870 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987" exitCode=0 Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fec3f9e252ebee54ea09bd1a939990a6d8655bedec12391d7677314d3d9171d1"} Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.749972 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.750673 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.750700 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:57 crc kubenswrapper[4687]: I0314 08:56:57.750711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:57 crc kubenswrapper[4687]: E0314 08:56:57.786807 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:56:58 crc kubenswrapper[4687]: W0314 08:56:58.438618 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:58 crc kubenswrapper[4687]: E0314 08:56:58.438717 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" 
logger="UnhandledError" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.677754 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.753075 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.753120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.753222 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.754444 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.754474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.754485 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.756128 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0" exitCode=0 Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.756165 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.756282 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.764179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.764218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.764243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.766100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.766157 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.766174 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.766317 4687 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.768513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.768544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.768556 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.770093 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.770168 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.770963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.770984 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.770993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.773049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.773075 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.773085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88"} Mar 14 08:56:58 crc kubenswrapper[4687]: I0314 08:56:58.773096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe"} Mar 14 08:56:58 crc kubenswrapper[4687]: W0314 08:56:58.800524 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Mar 14 08:56:58 crc kubenswrapper[4687]: E0314 08:56:58.800620 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:56:59 crc kubenswrapper[4687]: E0314 08:56:59.079215 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.082249 4687 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.085009 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.085068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.085080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.085116 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:56:59 crc kubenswrapper[4687]: E0314 08:56:59.085712 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.419749 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.778853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f39ac2fd2fd874f2dc8c8a31a08e326525b9fbc6047ccf7d5370637a4586c117"} Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.778900 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.779908 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.779940 4687 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.779951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782200 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc" exitCode=0 Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782285 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782317 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782329 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc"} Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782456 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.782574 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783245 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.783795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:56:59 crc kubenswrapper[4687]: I0314 08:56:59.791036 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.771367 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789315 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789839 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9"} Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789872 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19"} Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789886 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6"} Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524"} Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6"} Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.789997 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790386 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790409 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.790972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.791430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.791450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:00 crc kubenswrapper[4687]: I0314 08:57:00.791460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.123949 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.577101 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.587396 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.676192 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.791203 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.791232 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.793422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:01 crc kubenswrapper[4687]: I0314 08:57:01.904631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.033314 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.191512 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.191815 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.193502 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.193540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.193554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.286689 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.288721 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.288769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.288782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.288819 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.420143 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.420306 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.795305 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.795319 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.796919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.796969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.797035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.797054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.796986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:02 crc kubenswrapper[4687]: I0314 08:57:02.797137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:03 crc kubenswrapper[4687]: I0314 08:57:03.797838 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:03 crc kubenswrapper[4687]: I0314 08:57:03.798717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:03 crc kubenswrapper[4687]: I0314 08:57:03.798754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:03 crc kubenswrapper[4687]: I0314 08:57:03.798762 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:05 crc kubenswrapper[4687]: E0314 08:57:05.793006 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:05 crc kubenswrapper[4687]: I0314 08:57:05.986678 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:05 crc kubenswrapper[4687]: I0314 08:57:05.986885 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:05 crc kubenswrapper[4687]: I0314 08:57:05.988042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:05 crc kubenswrapper[4687]: I0314 08:57:05.988088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:05 crc kubenswrapper[4687]: I0314 08:57:05.988100 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:06 crc kubenswrapper[4687]: I0314 08:57:06.676853 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 14 08:57:06 crc kubenswrapper[4687]: I0314 08:57:06.677005 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:06 crc kubenswrapper[4687]: I0314 08:57:06.678078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:06 crc kubenswrapper[4687]: I0314 08:57:06.678153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:06 crc kubenswrapper[4687]: I0314 08:57:06.678169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 14 08:57:09 crc kubenswrapper[4687]: W0314 08:57:09.485836 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.485949 4687 trace.go:236] Trace[241946929]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Mar-2026 08:56:59.484) (total time: 10001ms): Mar 14 08:57:09 crc kubenswrapper[4687]: Trace[241946929]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:57:09.485) Mar 14 08:57:09 crc kubenswrapper[4687]: Trace[241946929]: [10.001353173s] [10.001353173s] END Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.486015 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.594709 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.598201 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:09 crc kubenswrapper[4687]: W0314 08:57:09.598955 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.599028 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:09 crc kubenswrapper[4687]: W0314 08:57:09.601350 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z Mar 14 08:57:09 crc 
kubenswrapper[4687]: E0314 08:57:09.601397 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.604813 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.608074 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.608745 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.608821 4687 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.614654 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.614710 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.616022 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z Mar 14 08:57:09 crc kubenswrapper[4687]: W0314 08:57:09.625818 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z Mar 14 08:57:09 crc kubenswrapper[4687]: E0314 08:57:09.625904 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.679458 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:09Z is after 2026-02-23T05:33:13Z Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.815192 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.816929 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f39ac2fd2fd874f2dc8c8a31a08e326525b9fbc6047ccf7d5370637a4586c117" exitCode=255 Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.816984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f39ac2fd2fd874f2dc8c8a31a08e326525b9fbc6047ccf7d5370637a4586c117"} Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.817126 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.820309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.820374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.820387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:09 crc kubenswrapper[4687]: I0314 08:57:09.821019 4687 scope.go:117] "RemoveContainer" containerID="f39ac2fd2fd874f2dc8c8a31a08e326525b9fbc6047ccf7d5370637a4586c117" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.685801 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:10Z is after 2026-02-23T05:33:13Z Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.777689 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]log ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]etcd ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/generic-apiserver-start-informers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 14 08:57:10 crc kubenswrapper[4687]: 
[+]poststarthook/priority-and-fairness-filter ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-apiextensions-informers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-apiextensions-controllers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/crd-informer-synced ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-system-namespaces-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/bootstrap-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/start-kube-aggregator-informers ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-registration-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-discovery-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]autoregister-completion ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-openapi-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 14 08:57:10 crc kubenswrapper[4687]: livez check failed Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.777755 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.820718 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.821189 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.822661 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" exitCode=255 Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.822698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628"} Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.822747 4687 scope.go:117] 
"RemoveContainer" containerID="f39ac2fd2fd874f2dc8c8a31a08e326525b9fbc6047ccf7d5370637a4586c117" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.822849 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.823820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.823847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.823856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:10 crc kubenswrapper[4687]: I0314 08:57:10.824304 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:10 crc kubenswrapper[4687]: E0314 08:57:10.824471 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.053969 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.124814 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.679578 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:11Z is after 2026-02-23T05:33:13Z Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.826642 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.828557 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.829307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.829361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.829375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.829833 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:11 crc kubenswrapper[4687]: E0314 08:57:11.830005 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.908312 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.908470 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.909375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.909468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:11 crc kubenswrapper[4687]: I0314 08:57:11.909530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.420791 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.420904 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.680177 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:12Z is after 
2026-02-23T05:33:13Z Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.831867 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.833516 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.833566 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.833580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:12 crc kubenswrapper[4687]: I0314 08:57:12.834175 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:12 crc kubenswrapper[4687]: E0314 08:57:12.834360 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:13 crc kubenswrapper[4687]: I0314 08:57:13.679951 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:13Z is after 2026-02-23T05:33:13Z Mar 14 08:57:14 crc kubenswrapper[4687]: W0314 08:57:14.074217 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:14Z is after 2026-02-23T05:33:13Z Mar 14 08:57:14 crc kubenswrapper[4687]: E0314 08:57:14.074310 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:14 crc kubenswrapper[4687]: W0314 08:57:14.456786 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:14Z is after 2026-02-23T05:33:13Z Mar 14 08:57:14 crc kubenswrapper[4687]: E0314 08:57:14.456992 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:14 crc kubenswrapper[4687]: I0314 08:57:14.681454 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-03-14T08:57:14Z is after 2026-02-23T05:33:13Z Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.681798 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:15Z is after 2026-02-23T05:33:13Z Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.780838 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.780998 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.782398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.782476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.782504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.783703 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:15 crc kubenswrapper[4687]: E0314 08:57:15.784028 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 
08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.787977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:15 crc kubenswrapper[4687]: E0314 08:57:15.793137 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.841511 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.842662 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.842719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.842744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.843668 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:15 crc kubenswrapper[4687]: E0314 08:57:15.843950 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.995465 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.996798 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.996834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.996846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:15 crc kubenswrapper[4687]: I0314 08:57:15.996879 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:15 crc kubenswrapper[4687]: E0314 08:57:15.999512 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:15Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:16 crc kubenswrapper[4687]: E0314 08:57:16.012057 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:16Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.680354 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:16Z is after 2026-02-23T05:33:13Z Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.702413 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.703180 4687 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.704566 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.704607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.704616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.717805 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.843401 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.844442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.844491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:16 crc kubenswrapper[4687]: I0314 08:57:16.844516 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:17 crc kubenswrapper[4687]: W0314 08:57:17.240949 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:17Z is after 2026-02-23T05:33:13Z Mar 14 08:57:17 crc kubenswrapper[4687]: E0314 08:57:17.241028 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:17 crc kubenswrapper[4687]: I0314 08:57:17.679638 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:17Z is after 2026-02-23T05:33:13Z Mar 14 08:57:18 crc kubenswrapper[4687]: I0314 08:57:18.398848 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:18 crc kubenswrapper[4687]: E0314 08:57:18.402006 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:18 crc kubenswrapper[4687]: I0314 08:57:18.679803 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:18Z is after 2026-02-23T05:33:13Z Mar 14 08:57:19 crc kubenswrapper[4687]: W0314 08:57:19.277366 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:19Z is after 2026-02-23T05:33:13Z Mar 14 08:57:19 crc kubenswrapper[4687]: E0314 08:57:19.277439 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:19Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:19 crc kubenswrapper[4687]: E0314 08:57:19.601895 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:19Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:19 crc kubenswrapper[4687]: I0314 08:57:19.679471 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:57:19Z is after 2026-02-23T05:33:13Z Mar 14 08:57:20 crc kubenswrapper[4687]: I0314 08:57:20.678686 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:20Z is after 2026-02-23T05:33:13Z Mar 14 08:57:21 crc kubenswrapper[4687]: I0314 08:57:21.679414 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:21Z is after 2026-02-23T05:33:13Z Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.420709 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.420772 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.420824 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:22 crc 
kubenswrapper[4687]: I0314 08:57:22.420955 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.422420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.422453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.422464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.422909 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.423045 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9" gracePeriod=30 Mar 14 08:57:22 crc kubenswrapper[4687]: W0314 08:57:22.575784 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:22Z is after 2026-02-23T05:33:13Z Mar 14 08:57:22 crc kubenswrapper[4687]: E0314 08:57:22.576218 4687 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:22Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.679408 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:22Z is after 2026-02-23T05:33:13Z Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.864873 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.865204 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9" exitCode=255 Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.865242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9"} Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.865268 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2"} Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.865390 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.866186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.866261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:22 crc kubenswrapper[4687]: I0314 08:57:22.866286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.000551 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.001722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.001771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.001781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.001805 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:23 crc kubenswrapper[4687]: E0314 08:57:23.004603 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:23Z is after 2026-02-23T05:33:13Z" 
node="crc" Mar 14 08:57:23 crc kubenswrapper[4687]: E0314 08:57:23.015665 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:23Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:23 crc kubenswrapper[4687]: I0314 08:57:23.679953 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:23Z is after 2026-02-23T05:33:13Z Mar 14 08:57:24 crc kubenswrapper[4687]: W0314 08:57:24.199672 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:24Z is after 2026-02-23T05:33:13Z Mar 14 08:57:24 crc kubenswrapper[4687]: E0314 08:57:24.199739 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:24 crc kubenswrapper[4687]: I0314 08:57:24.680220 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:24Z is after 2026-02-23T05:33:13Z Mar 14 08:57:25 crc kubenswrapper[4687]: I0314 08:57:25.679412 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:25Z is after 2026-02-23T05:33:13Z Mar 14 08:57:25 crc kubenswrapper[4687]: E0314 08:57:25.793211 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:26 crc kubenswrapper[4687]: I0314 08:57:26.681947 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:26Z is after 2026-02-23T05:33:13Z Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.682578 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:27Z is after 2026-02-23T05:33:13Z Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.737095 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.739072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.739146 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.739160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4687]: I0314 08:57:27.739985 4687 scope.go:117] "RemoveContainer" containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.681634 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:28Z is after 2026-02-23T05:33:13Z Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.883054 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.884020 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.885406 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b" exitCode=255 Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.885459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b"} Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.885538 4687 scope.go:117] "RemoveContainer" 
containerID="e0e0a4cc88bf4479b45a5a55b3f791e6f7e5244d9d78de6904373a202d1d9628" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.885671 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.886758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.886796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.886808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:28 crc kubenswrapper[4687]: I0314 08:57:28.887510 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b" Mar 14 08:57:28 crc kubenswrapper[4687]: E0314 08:57:28.887823 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.420435 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.420628 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.421828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 
08:57:29.421873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.421885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4687]: E0314 08:57:29.605589 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:29Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.680396 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:29Z is after 2026-02-23T05:33:13Z Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.791954 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.890313 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.893113 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.894210 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.894287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4687]: I0314 08:57:29.894313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.005749 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.007803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.007851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.007869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.007899 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:30 crc kubenswrapper[4687]: E0314 08:57:30.011737 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:30Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 
08:57:30 crc kubenswrapper[4687]: E0314 08:57:30.018994 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:30Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:30 crc kubenswrapper[4687]: I0314 08:57:30.682266 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:30Z is after 2026-02-23T05:33:13Z Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.054737 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.054906 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.057037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.057074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.057085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.057555 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b" Mar 14 08:57:31 crc kubenswrapper[4687]: E0314 08:57:31.057736 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.124687 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.680719 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:31Z is after 2026-02-23T05:33:13Z Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.899200 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.900364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.900399 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.900411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:31 crc kubenswrapper[4687]: I0314 08:57:31.901291 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b" Mar 14 08:57:31 crc kubenswrapper[4687]: E0314 08:57:31.901556 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 
20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:32 crc kubenswrapper[4687]: I0314 08:57:32.421042 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:32 crc kubenswrapper[4687]: I0314 08:57:32.421135 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:32 crc kubenswrapper[4687]: I0314 08:57:32.680732 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:32Z is after 2026-02-23T05:33:13Z Mar 14 08:57:33 crc kubenswrapper[4687]: I0314 08:57:33.680065 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:33Z is after 2026-02-23T05:33:13Z Mar 14 08:57:34 crc kubenswrapper[4687]: I0314 08:57:34.682075 4687 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:34Z is after 2026-02-23T05:33:13Z Mar 14 08:57:35 crc kubenswrapper[4687]: I0314 08:57:35.550674 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:35 crc kubenswrapper[4687]: E0314 08:57:35.554449 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:35 crc kubenswrapper[4687]: E0314 08:57:35.555812 4687 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 14 08:57:35 crc kubenswrapper[4687]: I0314 08:57:35.682060 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:35Z is after 2026-02-23T05:33:13Z Mar 14 08:57:35 crc kubenswrapper[4687]: E0314 08:57:35.793550 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:36 crc kubenswrapper[4687]: I0314 08:57:36.681822 4687 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:36Z is after 2026-02-23T05:33:13Z Mar 14 08:57:36 crc kubenswrapper[4687]: W0314 08:57:36.867460 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:36Z is after 2026-02-23T05:33:13Z Mar 14 08:57:36 crc kubenswrapper[4687]: E0314 08:57:36.867559 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.012431 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.014178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.014246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.014270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.014310 4687 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Mar 14 08:57:37 crc kubenswrapper[4687]: E0314 08:57:37.019607 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:37Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:37 crc kubenswrapper[4687]: E0314 08:57:37.023719 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:37Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:37 crc kubenswrapper[4687]: I0314 08:57:37.680085 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:37Z is after 2026-02-23T05:33:13Z Mar 14 08:57:38 crc kubenswrapper[4687]: I0314 08:57:38.679817 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:38Z is after 2026-02-23T05:33:13Z Mar 14 08:57:39 crc kubenswrapper[4687]: E0314 08:57:39.612210 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:39Z is after 2026-02-23T05:33:13Z" 
event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:39 crc kubenswrapper[4687]: I0314 08:57:39.680749 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:39Z is after 2026-02-23T05:33:13Z Mar 14 08:57:39 crc kubenswrapper[4687]: W0314 08:57:39.687477 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:39Z is after 2026-02-23T05:33:13Z Mar 14 08:57:39 crc kubenswrapper[4687]: E0314 08:57:39.687536 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:40 crc kubenswrapper[4687]: W0314 08:57:40.294176 4687 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:40Z is after 2026-02-23T05:33:13Z Mar 14 08:57:40 crc kubenswrapper[4687]: E0314 08:57:40.294272 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:40 crc kubenswrapper[4687]: I0314 08:57:40.681689 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:40Z is after 2026-02-23T05:33:13Z Mar 14 08:57:41 crc kubenswrapper[4687]: I0314 08:57:41.680925 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:42 crc kubenswrapper[4687]: I0314 08:57:42.420679 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:42 crc kubenswrapper[4687]: I0314 08:57:42.422266 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:42 crc kubenswrapper[4687]: I0314 08:57:42.680041 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:42Z is after 2026-02-23T05:33:13Z Mar 14 08:57:43 crc kubenswrapper[4687]: I0314 08:57:43.679671 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:43Z is after 2026-02-23T05:33:13Z Mar 14 08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.020563 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.021904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.022011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.022079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 
08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.022157 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:57:44 crc kubenswrapper[4687]: E0314 08:57:44.025293 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:44Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 08:57:44 crc kubenswrapper[4687]: E0314 08:57:44.026883 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:44Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 08:57:44 crc kubenswrapper[4687]: I0314 08:57:44.678789 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:44Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.682250 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:45Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.736124 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.737330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.737397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.737414 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.738095 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b"
Mar 14 08:57:45 crc kubenswrapper[4687]: E0314 08:57:45.738293 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 08:57:45 crc kubenswrapper[4687]: E0314 08:57:45.794023 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.991707 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.991882 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.993465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.993530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:45 crc kubenswrapper[4687]: I0314 08:57:45.993589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:46 crc kubenswrapper[4687]: I0314 08:57:46.682411 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:47 crc kubenswrapper[4687]: W0314 08:57:47.156664 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:47 crc kubenswrapper[4687]: E0314 08:57:47.156736 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:57:47 crc kubenswrapper[4687]: I0314 08:57:47.679870 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:48 crc kubenswrapper[4687]: I0314 08:57:48.680530 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:48Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:49 crc kubenswrapper[4687]: E0314 08:57:49.616413 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:49Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:49 crc kubenswrapper[4687]: I0314 08:57:49.681313 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:49Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:50 crc kubenswrapper[4687]: I0314 08:57:50.680795 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:50Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.025979 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.027211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.027248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.027261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.027296 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:57:51 crc kubenswrapper[4687]: E0314 08:57:51.029643 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 08:57:51 crc kubenswrapper[4687]: E0314 08:57:51.030203 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 08:57:51 crc kubenswrapper[4687]: I0314 08:57:51.680361 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.421171 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.421307 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.421452 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.421650 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.422981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.423013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.423024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.423539 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.423643 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2" gracePeriod=30
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.679833 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.950940 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952155 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952602 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2" exitCode=255
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2"}
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095"}
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952712 4687 scope.go:117] "RemoveContainer" containerID="8b69e1759d9e3da6c8f75d214dab9f2b47a4672bcc7c6574ead3d657b1746ec9"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.952850 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.953832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.953857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:52 crc kubenswrapper[4687]: I0314 08:57:52.953866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:53 crc kubenswrapper[4687]: I0314 08:57:53.679810 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:53Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:53 crc kubenswrapper[4687]: I0314 08:57:53.959806 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 14 08:57:54 crc kubenswrapper[4687]: I0314 08:57:54.684286 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:54Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:55 crc kubenswrapper[4687]: I0314 08:57:55.681623 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:57:55 crc kubenswrapper[4687]: E0314 08:57:55.795105 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 08:57:56 crc kubenswrapper[4687]: I0314 08:57:56.680376 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:57:57 crc kubenswrapper[4687]: I0314 08:57:57.681453 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.030746 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.032192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.032233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.032242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.032263 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:57:58 crc kubenswrapper[4687]: E0314 08:57:58.035846 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 14 08:57:58 crc kubenswrapper[4687]: E0314 08:57:58.036151 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.680928 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.736633 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.737772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.737828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.737842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.738544 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b"
Mar 14 08:57:58 crc kubenswrapper[4687]: I0314 08:57:58.973452 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.419921 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.420085 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.421484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.421536 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.421555 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.621028 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96797f67551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,LastTimestamp:2026-03-14 08:56:55.673369937 +0000 UTC m=+0.661610302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.625422 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.631397 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.641817 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.646318 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679efad67c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.791097468 +0000 UTC m=+0.779337833,LastTimestamp:2026-03-14 08:56:55.791097468 +0000 UTC m=+0.779337833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.651286 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.837810486 +0000 UTC m=+0.826050861,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.657892 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.837830227 +0000 UTC m=+0.826070602,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.661991 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.837838327 +0000 UTC m=+0.826078702,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.666100 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.838866332 +0000 UTC m=+0.827106707,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.670521 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.838901893 +0000 UTC m=+0.827142258,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.674700 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.838910123 +0000 UTC m=+0.827150498,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.678224 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.840921851 +0000 UTC m=+0.829162226,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.679046 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.681516 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.840959131 +0000 UTC m=+0.829199506,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.682568 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.840969812 +0000 UTC m=+0.829210187,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.686987 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.84126704 +0000 UTC m=+0.829507415,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.691290 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.8412809 +0000 UTC m=+0.829521275,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.695426 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.84128901 +0000 UTC m=+0.829529385,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.700046 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.841427613 +0000 UTC m=+0.829667988,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.703995 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.841447794 +0000 UTC m=+0.829688169,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.708712 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.841457624 +0000 UTC m=+0.829697999,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.712517 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.84294979 +0000 UTC m=+0.831190175,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.716290 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.84296386 +0000 UTC m=+0.831204245,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.720358 4687
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4ad3d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4ad3d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712453584 +0000 UTC m=+0.700693959,LastTimestamp:2026-03-14 08:56:55.8429756 +0000 UTC m=+0.831215985,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.724227 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4a6471\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4a6471 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712425073 +0000 UTC m=+0.700665448,LastTimestamp:2026-03-14 08:56:55.843131714 +0000 UTC m=+0.831372089,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.728905 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca9679a4aaea9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca9679a4aaea9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:55.712444073 +0000 UTC m=+0.700684448,LastTimestamp:2026-03-14 08:56:55.843142944 +0000 UTC m=+0.831383319,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.734058 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca967d79f21d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:56.741388754 +0000 UTC m=+1.729629129,LastTimestamp:2026-03-14 08:56:56.741388754 +0000 UTC m=+1.729629129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.738227 4687 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca967d7a5665a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:56.741799514 +0000 UTC m=+1.730039909,LastTimestamp:2026-03-14 08:56:56.741799514 +0000 UTC m=+1.730039909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.743605 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca967d7abd832 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:56.742221874 +0000 UTC m=+1.730462249,LastTimestamp:2026-03-14 08:56:56.742221874 +0000 UTC m=+1.730462249,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.748126 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca967d839064d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:56.751474253 +0000 UTC m=+1.739714628,LastTimestamp:2026-03-14 08:56:56.751474253 +0000 UTC m=+1.739714628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.752253 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca967d843a94b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:56.752171339 +0000 UTC m=+1.740411704,LastTimestamp:2026-03-14 08:56:56.752171339 +0000 UTC m=+1.740411704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.757551 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca967f78ab2ce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.276920526 +0000 UTC m=+2.265160901,LastTimestamp:2026-03-14 08:56:57.276920526 +0000 UTC m=+2.265160901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.760961 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca967f7963872 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.277675634 +0000 UTC m=+2.265916009,LastTimestamp:2026-03-14 08:56:57.277675634 +0000 UTC m=+2.265916009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.764926 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca967f79ebc57 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.278233687 +0000 UTC m=+2.266474052,LastTimestamp:2026-03-14 08:56:57.278233687 +0000 UTC m=+2.266474052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.771596 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca967f7a2754a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.278477642 +0000 UTC m=+2.266718017,LastTimestamp:2026-03-14 08:56:57.278477642 +0000 UTC m=+2.266718017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.779015 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca967f7a347c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.278531524 +0000 UTC m=+2.266771899,LastTimestamp:2026-03-14 08:56:57.278531524 +0000 UTC m=+2.266771899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.782933 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca967f831d34a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.287873354 +0000 UTC m=+2.276113749,LastTimestamp:2026-03-14 08:56:57.287873354 +0000 UTC m=+2.276113749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.787877 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca967f839c94e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.288395086 +0000 UTC m=+2.276635461,LastTimestamp:2026-03-14 08:56:57.288395086 +0000 UTC m=+2.276635461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.791040 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:59 crc 
kubenswrapper[4687]: E0314 08:57:59.792541 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca967f83a1611 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.288414737 +0000 UTC m=+2.276655112,LastTimestamp:2026-03-14 08:56:57.288414737 +0000 UTC m=+2.276655112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.796545 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca967f8524c74 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.290001524 +0000 UTC m=+2.278241899,LastTimestamp:2026-03-14 08:56:57.290001524 +0000 UTC m=+2.278241899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc 
kubenswrapper[4687]: E0314 08:57:59.800438 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca967f857be3d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.290358333 +0000 UTC m=+2.278598708,LastTimestamp:2026-03-14 08:56:57.290358333 +0000 UTC m=+2.278598708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.805755 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca967f863b989 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.291143561 +0000 UTC m=+2.279383926,LastTimestamp:2026-03-14 08:56:57.291143561 +0000 UTC m=+2.279383926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.810160 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9680a12e4dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.587836125 +0000 UTC m=+2.576076500,LastTimestamp:2026-03-14 08:56:57.587836125 +0000 UTC m=+2.576076500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.814390 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9680aad3739 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.597949753 +0000 UTC m=+2.586190128,LastTimestamp:2026-03-14 08:56:57.597949753 +0000 UTC m=+2.586190128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.818177 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9680abfba5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.599162972 +0000 UTC m=+2.587403347,LastTimestamp:2026-03-14 08:56:57.599162972 +0000 UTC m=+2.587403347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.822273 4687 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96813700ddd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.744936413 +0000 UTC m=+2.733176788,LastTimestamp:2026-03-14 08:56:57.744936413 +0000 UTC m=+2.733176788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.826965 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96813ad7718 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.748961048 +0000 UTC m=+2.737201423,LastTimestamp:2026-03-14 08:56:57.748961048 +0000 UTC 
m=+2.737201423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.831216 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96814069a08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.754802696 +0000 UTC m=+2.743043071,LastTimestamp:2026-03-14 08:56:57.754802696 +0000 UTC m=+2.743043071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.835244 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca9681420299f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.756477855 +0000 UTC m=+2.744718230,LastTimestamp:2026-03-14 08:56:57.756477855 +0000 UTC m=+2.744718230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.839412 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca9681420df63 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.756524387 +0000 UTC m=+2.744764762,LastTimestamp:2026-03-14 08:56:57.756524387 +0000 UTC m=+2.744764762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.843640 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96815698143 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.778061635 +0000 UTC m=+2.766302020,LastTimestamp:2026-03-14 08:56:57.778061635 +0000 UTC m=+2.766302020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.847251 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca968157b67fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.779234813 +0000 UTC m=+2.767475228,LastTimestamp:2026-03-14 08:56:57.779234813 +0000 UTC m=+2.767475228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.851656 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca9681f9c8a1b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.949178395 +0000 UTC m=+2.937418780,LastTimestamp:2026-03-14 08:56:57.949178395 +0000 UTC m=+2.937418780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.855392 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca9681fc77df3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.951993331 +0000 UTC m=+2.940233706,LastTimestamp:2026-03-14 
08:56:57.951993331 +0000 UTC m=+2.940233706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.859141 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9681fc87286 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.952055942 +0000 UTC m=+2.940296317,LastTimestamp:2026-03-14 08:56:57.952055942 +0000 UTC m=+2.940296317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.862835 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9681fc9273b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.952102203 +0000 UTC m=+2.940342578,LastTimestamp:2026-03-14 08:56:57.952102203 +0000 UTC m=+2.940342578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.866498 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9681fc96a4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.952119374 +0000 UTC m=+2.940359749,LastTimestamp:2026-03-14 08:56:57.952119374 +0000 UTC m=+2.940359749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.870494 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96820d5828d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.969689229 +0000 UTC m=+2.957929604,LastTimestamp:2026-03-14 08:56:57.969689229 +0000 UTC m=+2.957929604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.873622 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96820e3468a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.97059137 +0000 UTC m=+2.958831745,LastTimestamp:2026-03-14 08:56:57.97059137 +0000 UTC m=+2.958831745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.875122 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca968213a574e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.976297294 +0000 UTC m=+2.964537669,LastTimestamp:2026-03-14 08:56:57.976297294 +0000 UTC m=+2.964537669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.877844 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca968216fb42d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.979794477 +0000 UTC m=+2.968034852,LastTimestamp:2026-03-14 08:56:57.979794477 +0000 UTC m=+2.968034852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.880959 4687 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9682171e0c1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.979936961 +0000 UTC m=+2.968177336,LastTimestamp:2026-03-14 08:56:57.979936961 +0000 UTC m=+2.968177336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.885579 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96821720529 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.979946281 +0000 UTC m=+2.968186656,LastTimestamp:2026-03-14 08:56:57.979946281 +0000 UTC m=+2.968186656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc 
kubenswrapper[4687]: E0314 08:57:59.889709 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9682188b71a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.981433626 +0000 UTC m=+2.969674001,LastTimestamp:2026-03-14 08:56:57.981433626 +0000 UTC m=+2.969674001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.892800 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca9682c5e7a5a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.163214938 +0000 UTC 
m=+3.151455313,LastTimestamp:2026-03-14 08:56:58.163214938 +0000 UTC m=+3.151455313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.896327 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9682cc76f78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.170093432 +0000 UTC m=+3.158333807,LastTimestamp:2026-03-14 08:56:58.170093432 +0000 UTC m=+3.158333807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.899591 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca9682d23dee5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.176151269 +0000 UTC m=+3.164391644,LastTimestamp:2026-03-14 08:56:58.176151269 +0000 UTC m=+3.164391644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.904312 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca9682d346cd4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.17723618 +0000 UTC m=+3.165476555,LastTimestamp:2026-03-14 08:56:58.17723618 +0000 UTC m=+3.165476555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.907629 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9682d5ecb5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.180012894 +0000 UTC m=+3.168253269,LastTimestamp:2026-03-14 08:56:58.180012894 +0000 UTC m=+3.168253269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.911178 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9682d72922a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.18130897 +0000 UTC m=+3.169549355,LastTimestamp:2026-03-14 08:56:58.18130897 +0000 UTC m=+3.169549355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.915367 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96839208ac9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.377259721 +0000 UTC m=+3.365500096,LastTimestamp:2026-03-14 08:56:58.377259721 +0000 UTC m=+3.365500096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.921284 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96839213966 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.377304422 +0000 UTC m=+3.365544797,LastTimestamp:2026-03-14 08:56:58.377304422 +0000 UTC m=+3.365544797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc 
kubenswrapper[4687]: E0314 08:57:59.925102 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96839f7dc0d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.391370765 +0000 UTC m=+3.379611140,LastTimestamp:2026-03-14 08:56:58.391370765 +0000 UTC m=+3.379611140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.928438 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9683a359038 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.395414584 +0000 UTC m=+3.383654949,LastTimestamp:2026-03-14 08:56:58.395414584 +0000 UTC 
m=+3.383654949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.931652 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9683a4781b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.396590517 +0000 UTC m=+3.384830892,LastTimestamp:2026-03-14 08:56:58.396590517 +0000 UTC m=+3.384830892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.935259 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96844da5608 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.573985288 +0000 UTC m=+3.562225663,LastTimestamp:2026-03-14 08:56:58.573985288 +0000 UTC m=+3.562225663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.938936 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96845f58d01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.592546049 +0000 UTC m=+3.580786424,LastTimestamp:2026-03-14 08:56:58.592546049 +0000 UTC m=+3.580786424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.945290 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9684603dc82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.593483906 +0000 UTC m=+3.581724281,LastTimestamp:2026-03-14 08:56:58.593483906 +0000 UTC m=+3.581724281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.950227 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96850488835 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.765756469 +0000 UTC m=+3.753996844,LastTimestamp:2026-03-14 08:56:58.765756469 +0000 UTC m=+3.753996844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.956100 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96850c2e7cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.773776335 +0000 UTC m=+3.762016710,LastTimestamp:2026-03-14 08:56:58.773776335 +0000 UTC m=+3.762016710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.961044 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96851adf8d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.789181654 +0000 UTC m=+3.777422029,LastTimestamp:2026-03-14 
08:56:58.789181654 +0000 UTC m=+3.777422029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.966455 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9685b90427f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.955006591 +0000 UTC m=+3.943246966,LastTimestamp:2026-03-14 08:56:58.955006591 +0000 UTC m=+3.943246966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.970822 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9685c607684 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.968651396 +0000 UTC m=+3.956891771,LastTimestamp:2026-03-14 08:56:58.968651396 +0000 UTC 
m=+3.956891771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.976734 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9688d0505d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:59.78474236 +0000 UTC m=+4.772982735,LastTimestamp:2026-03-14 08:56:59.78474236 +0000 UTC m=+4.772982735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.979413 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.979931 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.980713 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca96896d86531 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:59.949589809 +0000 UTC m=+4.937830194,LastTimestamp:2026-03-14 08:56:59.949589809 +0000 UTC m=+4.937830194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.982452 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" exitCode=255 Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.982594 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.983059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1"} Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.983105 4687 scope.go:117] "RemoveContainer" containerID="b7e0764e5a7962eb888accfc908863d77f1980b86161fabebb731cf5e0b19c8b" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.983220 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984091 
4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:59 crc kubenswrapper[4687]: I0314 08:57:59.984576 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.984781 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.986009 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968975a857a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:59.958117754 +0000 UTC m=+4.946358149,LastTimestamp:2026-03-14 08:56:59.958117754 +0000 UTC m=+4.946358149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.990552 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968976e32af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:59.959407279 +0000 UTC m=+4.947647674,LastTimestamp:2026-03-14 08:56:59.959407279 +0000 UTC m=+4.947647674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.994362 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968a0988d89 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.113177993 +0000 UTC m=+5.101418368,LastTimestamp:2026-03-14 08:57:00.113177993 +0000 UTC m=+5.101418368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:59 crc kubenswrapper[4687]: E0314 08:57:59.998096 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968a11f7aa2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.122020514 +0000 UTC m=+5.110260889,LastTimestamp:2026-03-14 08:57:00.122020514 +0000 UTC m=+5.110260889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.001596 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968a12d1a66 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.122913382 +0000 UTC m=+5.111153757,LastTimestamp:2026-03-14 08:57:00.122913382 +0000 UTC m=+5.111153757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.005385 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968ab001423 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.287734819 +0000 UTC m=+5.275975194,LastTimestamp:2026-03-14 08:57:00.287734819 +0000 UTC m=+5.275975194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.008780 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968ab79f22b 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.295721515 +0000 UTC m=+5.283961890,LastTimestamp:2026-03-14 08:57:00.295721515 +0000 UTC m=+5.283961890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.012294 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968ab8560c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.29647072 +0000 UTC m=+5.284711095,LastTimestamp:2026-03-14 08:57:00.29647072 +0000 UTC m=+5.284711095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.015464 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca968b7561331 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.494697265 +0000 UTC m=+5.482937640,LastTimestamp:2026-03-14 08:57:00.494697265 +0000 UTC m=+5.482937640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.019128 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968b83f4260 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.509979232 +0000 UTC m=+5.498219597,LastTimestamp:2026-03-14 08:57:00.509979232 +0000 UTC m=+5.498219597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.023142 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968b84c0e4c openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.510817868 +0000 UTC m=+5.499058243,LastTimestamp:2026-03-14 08:57:00.510817868 +0000 UTC m=+5.499058243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.027649 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca968c4769944 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.714932548 +0000 UTC m=+5.703172913,LastTimestamp:2026-03-14 08:57:00.714932548 +0000 UTC m=+5.703172913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.031915 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca968c55575dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:00.729538012 +0000 UTC m=+5.717778397,LastTimestamp:2026-03-14 08:57:00.729538012 +0000 UTC m=+5.717778397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.036656 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca9692a1b81e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:00 crc kubenswrapper[4687]: body: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:02.420238825 +0000 UTC m=+7.408479240,LastTimestamp:2026-03-14 08:57:02.420238825 +0000 UTC m=+7.408479240,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.040162 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9692a1dd9c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:02.420392389 +0000 UTC m=+7.408632804,LastTimestamp:2026-03-14 08:57:02.420392389 +0000 UTC m=+7.408632804,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.044449 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-apiserver-crc.189ca96ad6944007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 08:58:00 
crc kubenswrapper[4687]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:58:00 crc kubenswrapper[4687]: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:09.608800263 +0000 UTC m=+14.597040638,LastTimestamp:2026-03-14 08:57:09.608800263 +0000 UTC m=+14.597040638,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.047938 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96ad6950632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:09.608850994 +0000 UTC m=+14.597091369,LastTimestamp:2026-03-14 08:57:09.608850994 +0000 UTC m=+14.597091369,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.051153 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96ad6944007\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-apiserver-crc.189ca96ad6944007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 08:58:00 crc kubenswrapper[4687]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:58:00 crc kubenswrapper[4687]: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:09.608800263 +0000 UTC m=+14.597040638,LastTimestamp:2026-03-14 08:57:09.614692717 +0000 UTC m=+14.602933092,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.054235 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96ad6950632\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96ad6950632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:09.608850994 +0000 UTC m=+14.597091369,LastTimestamp:2026-03-14 08:57:09.614734118 +0000 UTC m=+14.602974493,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.057849 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca9684603dc82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9684603dc82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.593483906 +0000 UTC m=+3.581724281,LastTimestamp:2026-03-14 08:57:09.822117052 +0000 UTC m=+14.810357427,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.061639 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96850c2e7cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96850c2e7cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.773776335 +0000 UTC m=+3.762016710,LastTimestamp:2026-03-14 08:57:09.944844023 +0000 UTC m=+14.933084398,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.065124 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96851adf8d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96851adf8d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:58.789181654 +0000 UTC m=+3.777422029,LastTimestamp:2026-03-14 08:57:09.960433465 +0000 UTC m=+14.948673840,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.069397 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e30d092 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:00 crc kubenswrapper[4687]: body: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420855954 +0000 UTC m=+17.409096349,LastTimestamp:2026-03-14 08:57:12.420855954 +0000 UTC m=+17.409096349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.073265 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e32104e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420937806 +0000 UTC m=+17.409178201,LastTimestamp:2026-03-14 08:57:12.420937806 +0000 UTC m=+17.409178201,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.078619 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96b7e30d092\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e30d092 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:00 crc kubenswrapper[4687]: body: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420855954 +0000 UTC m=+17.409096349,LastTimestamp:2026-03-14 08:57:22.42075617 +0000 UTC m=+27.408996545,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.080223 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96b7e32104e\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e32104e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420937806 +0000 UTC m=+17.409178201,LastTimestamp:2026-03-14 08:57:22.42079998 +0000 UTC m=+27.409040355,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.083601 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96dd25ddb22 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:22.423028514 +0000 UTC m=+27.411268899,LastTimestamp:2026-03-14 
08:57:22.423028514 +0000 UTC m=+27.411268899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.084667 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca967f863b989\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca967f863b989 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.291143561 +0000 UTC m=+2.279383926,LastTimestamp:2026-03-14 08:57:22.542153335 +0000 UTC m=+27.530393710,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.088191 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9680a12e4dd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9680a12e4dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.587836125 +0000 UTC m=+2.576076500,LastTimestamp:2026-03-14 08:57:22.697667973 +0000 UTC m=+27.685908348,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.091618 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9680aad3739\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9680aad3739 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:56:57.597949753 +0000 UTC m=+2.586190128,LastTimestamp:2026-03-14 08:57:22.705958504 +0000 UTC m=+27.694198879,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.096599 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96b7e30d092\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e30d092 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:00 crc kubenswrapper[4687]: body: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420855954 +0000 UTC m=+17.409096349,LastTimestamp:2026-03-14 08:57:32.421110346 +0000 UTC m=+37.409350731,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.100163 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96b7e32104e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e32104e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420937806 +0000 UTC m=+17.409178201,LastTimestamp:2026-03-14 08:57:32.421171177 +0000 UTC m=+37.409411572,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:00 crc kubenswrapper[4687]: E0314 08:58:00.103942 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96b7e30d092\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca96b7e30d092 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:00 crc kubenswrapper[4687]: body: Mar 14 08:58:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:12.420855954 +0000 UTC m=+17.409096349,LastTimestamp:2026-03-14 08:57:42.422227533 +0000 UTC m=+47.410467928,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:00 crc kubenswrapper[4687]: > Mar 14 08:58:00 crc kubenswrapper[4687]: I0314 08:58:00.681931 4687 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:00 crc kubenswrapper[4687]: I0314 08:58:00.986530 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.054046 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.054404 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.055937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.055983 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.055994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.056612 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:01 crc kubenswrapper[4687]: E0314 08:58:01.056791 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 
08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.124746 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.683492 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.990911 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.991741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.991772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.991800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:01 crc kubenswrapper[4687]: I0314 08:58:01.992275 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:01 crc kubenswrapper[4687]: E0314 08:58:01.992441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:02 crc kubenswrapper[4687]: I0314 08:58:02.422694 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:58:02 crc kubenswrapper[4687]: I0314 08:58:02.422850 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:58:02 crc kubenswrapper[4687]: I0314 08:58:02.681465 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:03 crc kubenswrapper[4687]: I0314 08:58:03.680915 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:04 crc kubenswrapper[4687]: I0314 08:58:04.681444 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 08:58:05.036651 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 08:58:05.037642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 
08:58:05.037670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 08:58:05.037678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 08:58:05.037698 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:05 crc kubenswrapper[4687]: E0314 08:58:05.041136 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 08:58:05 crc kubenswrapper[4687]: E0314 08:58:05.041496 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 08:58:05 crc kubenswrapper[4687]: I0314 08:58:05.681537 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:05 crc kubenswrapper[4687]: E0314 08:58:05.796159 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:06 crc kubenswrapper[4687]: I0314 08:58:06.680270 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:07 crc kubenswrapper[4687]: W0314 08:58:07.319414 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes 
"crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 14 08:58:07 crc kubenswrapper[4687]: E0314 08:58:07.319470 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 14 08:58:07 crc kubenswrapper[4687]: I0314 08:58:07.557441 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:58:07 crc kubenswrapper[4687]: I0314 08:58:07.571128 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 08:58:07 crc kubenswrapper[4687]: I0314 08:58:07.693048 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:08 crc kubenswrapper[4687]: I0314 08:58:08.679945 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.429039 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.429200 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.430313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:09 crc 
kubenswrapper[4687]: I0314 08:58:09.430357 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.430368 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.432300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:09 crc kubenswrapper[4687]: I0314 08:58:09.680466 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.006854 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.007633 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.007662 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.007674 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.417420 4687 csr.go:261] certificate signing request csr-rwdsx is approved, waiting to be issued Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.428455 4687 csr.go:257] certificate signing request csr-rwdsx is issued Mar 14 08:58:10 crc kubenswrapper[4687]: I0314 08:58:10.509841 4687 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 14 08:58:10 crc 
kubenswrapper[4687]: I0314 08:58:10.519665 4687 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 14 08:58:11 crc kubenswrapper[4687]: I0314 08:58:11.430115 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 18:16:22.677155447 +0000 UTC Mar 14 08:58:11 crc kubenswrapper[4687]: I0314 08:58:11.430168 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6825h18m11.246990941s for next certificate rotation Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.041582 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.043055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.043082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.043091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.043228 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.048797 4687 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.049042 4687 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.049062 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.051597 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.051621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.051633 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.051649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.052003 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:12Z","lastTransitionTime":"2026-03-14T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.063113 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.069371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.069408 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.069433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.069446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.069455 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:12Z","lastTransitionTime":"2026-03-14T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.078653 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.084462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.084484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.084492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.084506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.084514 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:12Z","lastTransitionTime":"2026-03-14T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.093788 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.104011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.104054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.104066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.104082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:12 crc kubenswrapper[4687]: I0314 08:58:12.104094 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:12Z","lastTransitionTime":"2026-03-14T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.112487 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.112840 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.112924 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.213501 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.314186 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.414786 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.515510 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.616025 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.717175 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.817928 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:12 crc kubenswrapper[4687]: E0314 08:58:12.919183 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.019283 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.119901 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.220807 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.321617 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.421736 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.522574 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.623571 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.724541 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: I0314 08:58:13.736837 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:13 crc kubenswrapper[4687]: I0314 08:58:13.737840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:13 crc kubenswrapper[4687]: I0314 08:58:13.737933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:13 crc kubenswrapper[4687]: I0314 08:58:13.737999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:13 crc 
kubenswrapper[4687]: I0314 08:58:13.738560 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.738777 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.825436 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:13 crc kubenswrapper[4687]: E0314 08:58:13.926423 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.026968 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.127908 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.228856 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.329209 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.429930 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.530794 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 
08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.632423 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.732682 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: I0314 08:58:14.736127 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:14 crc kubenswrapper[4687]: I0314 08:58:14.737544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:14 crc kubenswrapper[4687]: I0314 08:58:14.737571 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:14 crc kubenswrapper[4687]: I0314 08:58:14.737579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.833321 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:14 crc kubenswrapper[4687]: E0314 08:58:14.933593 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.034162 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.135166 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.236147 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.336402 4687 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.437667 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.538674 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.639544 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.740557 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.796956 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.841735 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:15 crc kubenswrapper[4687]: E0314 08:58:15.942880 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.043518 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.144525 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.245022 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.345233 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.445911 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.546061 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.647234 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.747813 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.848276 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:16 crc kubenswrapper[4687]: E0314 08:58:16.949504 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.050288 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.150723 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.251501 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.352289 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.452882 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.553949 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc 
kubenswrapper[4687]: E0314 08:58:17.654672 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.755006 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.855882 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:17 crc kubenswrapper[4687]: E0314 08:58:17.956879 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.057127 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.158140 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: I0314 08:58:18.188717 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.258675 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.359560 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.460778 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.561983 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.663104 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: I0314 08:58:18.736131 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:18 crc kubenswrapper[4687]: I0314 08:58:18.737284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:18 crc kubenswrapper[4687]: I0314 08:58:18.737376 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:18 crc kubenswrapper[4687]: I0314 08:58:18.737394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.764082 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.865041 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:18 crc kubenswrapper[4687]: E0314 08:58:18.965825 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.066262 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.166446 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.267208 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.368035 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.469000 4687 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.569807 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.670818 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.771480 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.872558 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:19 crc kubenswrapper[4687]: E0314 08:58:19.973226 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.073551 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.174692 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.275557 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: I0314 08:58:20.306995 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.376628 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.476827 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.577965 
4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.678287 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.778500 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.879179 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:20 crc kubenswrapper[4687]: E0314 08:58:20.979922 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.081114 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.182223 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.282722 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.383498 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.484487 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.584588 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.684710 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc 
kubenswrapper[4687]: E0314 08:58:21.785455 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: I0314 08:58:21.836993 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.886625 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:21 crc kubenswrapper[4687]: E0314 08:58:21.987728 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.088893 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.168662 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.175180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.175221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.175233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.175255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.175269 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:22Z","lastTransitionTime":"2026-03-14T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.186944 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.191696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.191945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.192095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.192246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.192447 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:22Z","lastTransitionTime":"2026-03-14T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.207511 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.211700 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.211743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.211754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.211770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.211783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:22Z","lastTransitionTime":"2026-03-14T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.223314 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.228232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.228262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.228272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.228302 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:22 crc kubenswrapper[4687]: I0314 08:58:22.228314 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:22Z","lastTransitionTime":"2026-03-14T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.245303 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.245487 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.245514 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.346431 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.446994 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.547814 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.648477 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.749036 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.849822 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:22 crc kubenswrapper[4687]: E0314 08:58:22.950599 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.051369 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.151830 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.252877 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.353845 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.454556 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.555842 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.657432 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.758777 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.860143 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:23 crc kubenswrapper[4687]: E0314 08:58:23.960686 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.060779 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.161639 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.262291 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc 
kubenswrapper[4687]: E0314 08:58:24.363073 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.463703 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.564089 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.665040 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: I0314 08:58:24.736367 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:24 crc kubenswrapper[4687]: I0314 08:58:24.738623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:24 crc kubenswrapper[4687]: I0314 08:58:24.738675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:24 crc kubenswrapper[4687]: I0314 08:58:24.738687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:24 crc kubenswrapper[4687]: I0314 08:58:24.739565 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.739780 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 
08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.765675 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.866451 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:24 crc kubenswrapper[4687]: E0314 08:58:24.967031 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.067377 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.168388 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.269111 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.369915 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.471162 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.572441 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.673493 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.774161 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.797685 4687 eviction_manager.go:285] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.874954 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:25 crc kubenswrapper[4687]: E0314 08:58:25.975688 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.076085 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.177043 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.277614 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.377857 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.478700 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.579663 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.680270 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.781945 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.882074 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:26 crc kubenswrapper[4687]: E0314 08:58:26.982995 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.083283 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.184240 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.285278 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.386313 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.486705 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.587363 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.688328 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.788520 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.889700 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4687]: E0314 08:58:27.990279 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.090848 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc 
kubenswrapper[4687]: E0314 08:58:28.191378 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.292199 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.393134 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.493432 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.593543 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.694285 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.794802 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.895245 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:28 crc kubenswrapper[4687]: E0314 08:58:28.996314 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.096841 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.197497 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.298001 4687 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.398260 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.498656 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.599607 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.700387 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.802054 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:29 crc kubenswrapper[4687]: E0314 08:58:29.902815 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.003814 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.104570 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.205210 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.306172 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.406990 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.507245 4687 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.607934 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.708832 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.809811 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:30 crc kubenswrapper[4687]: E0314 08:58:30.910698 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.011196 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.112626 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.213282 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.314209 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.415174 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.515653 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.616572 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 
08:58:31.717807 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.818117 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:31 crc kubenswrapper[4687]: E0314 08:58:31.919508 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.020500 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.120834 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.221843 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.322409 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.371980 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.375509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.375714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.375892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.376067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.376231 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:32Z","lastTransitionTime":"2026-03-14T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.384244 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.387439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.387574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.387638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.387695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.387752 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:32Z","lastTransitionTime":"2026-03-14T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.395306 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.398216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.398372 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.398460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.398782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.398982 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:32Z","lastTransitionTime":"2026-03-14T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.412652 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.416548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.416740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.416801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.416866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:32 crc kubenswrapper[4687]: I0314 08:58:32.416925 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:32Z","lastTransitionTime":"2026-03-14T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.426731 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.427072 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.427147 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.527920 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.628271 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.729497 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.830537 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:32 crc kubenswrapper[4687]: E0314 08:58:32.931372 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.032597 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.133592 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.234146 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.335033 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.435231 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.535941 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.636939 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.737920 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.839101 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:33 crc kubenswrapper[4687]: E0314 08:58:33.939603 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.040097 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.140892 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.241690 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.342640 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.443651 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc 
kubenswrapper[4687]: E0314 08:58:34.543799 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.644799 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.745216 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.846185 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:34 crc kubenswrapper[4687]: E0314 08:58:34.946861 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.047635 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.148280 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.249069 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.349938 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.450703 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.551379 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.651504 4687 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: I0314 08:58:35.735962 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:35 crc kubenswrapper[4687]: I0314 08:58:35.737244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:35 crc kubenswrapper[4687]: I0314 08:58:35.737292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:35 crc kubenswrapper[4687]: I0314 08:58:35.737303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:35 crc kubenswrapper[4687]: I0314 08:58:35.738150 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.738319 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.752052 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.798028 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.852957 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:35 crc kubenswrapper[4687]: E0314 08:58:35.954095 4687 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.054514 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.155486 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.256448 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.356910 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.457815 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.559000 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.659116 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.760567 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.861909 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:36 crc kubenswrapper[4687]: E0314 08:58:36.962150 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.063748 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.164535 
4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.265755 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.366622 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.467578 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.567959 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.669137 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.769521 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.869700 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:37 crc kubenswrapper[4687]: E0314 08:58:37.970763 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.071118 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.172029 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.272892 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc 
kubenswrapper[4687]: E0314 08:58:38.373921 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.474810 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.575813 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.677358 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.777912 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.878797 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4687]: E0314 08:58:38.979729 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.080384 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.180795 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.281886 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.382932 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.483059 4687 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.583350 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.683451 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.784551 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.884991 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:39 crc kubenswrapper[4687]: E0314 08:58:39.985068 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.085720 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.186623 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.287571 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.388459 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.489319 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.589478 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.689561 4687 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.790079 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.890355 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:40 crc kubenswrapper[4687]: E0314 08:58:40.991160 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.091603 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.191980 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.292552 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.393168 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.493909 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.595003 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.695780 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.796044 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 
08:58:41.896251 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:41 crc kubenswrapper[4687]: E0314 08:58:41.997424 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.097835 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.198234 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.298813 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.399585 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.500380 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.600552 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.624748 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.629204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.629252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.629266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.629282 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.629292 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:42Z","lastTransitionTime":"2026-03-14T08:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.638921 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.642210 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.642425 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.642515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.642584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.642648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:42Z","lastTransitionTime":"2026-03-14T08:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.653287 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.656895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.656948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.656960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.656976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.656986 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:42Z","lastTransitionTime":"2026-03-14T08:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.665984 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.669517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.669564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.669573 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.669588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:42 crc kubenswrapper[4687]: I0314 08:58:42.669597 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:42Z","lastTransitionTime":"2026-03-14T08:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.679574 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.679748 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.701692 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.802035 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:42 crc kubenswrapper[4687]: E0314 08:58:42.902834 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.003507 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.103851 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.203938 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.304496 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.405153 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.505895 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.606586 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.707545 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.808644 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:43 crc kubenswrapper[4687]: E0314 08:58:43.909783 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.010767 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.111620 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.212403 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.312973 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.413927 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.514578 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.615393 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.716699 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc 
kubenswrapper[4687]: E0314 08:58:44.817542 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:44 crc kubenswrapper[4687]: E0314 08:58:44.918629 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.019561 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.119744 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.220485 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.321501 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.422261 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.523347 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.624295 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.725090 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.799412 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.825916 4687 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 14 08:58:45 crc kubenswrapper[4687]: E0314 08:58:45.926940 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.027598 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.128052 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.229221 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.330050 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.430681 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.531538 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.631887 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.732356 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: I0314 08:58:46.736744 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:46 crc kubenswrapper[4687]: I0314 08:58:46.737818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:46 crc kubenswrapper[4687]: I0314 08:58:46.737861 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:46 crc kubenswrapper[4687]: I0314 08:58:46.737879 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:46 crc kubenswrapper[4687]: I0314 08:58:46.738890 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.833392 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:46 crc kubenswrapper[4687]: E0314 08:58:46.933972 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.034823 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.096860 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.098899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925"} Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.099114 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.099952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.100113 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:47 crc kubenswrapper[4687]: I0314 08:58:47.100200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.135973 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.236292 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.336799 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.437528 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.537802 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.638543 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.739438 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.840500 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:47 crc kubenswrapper[4687]: E0314 08:58:47.941643 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.042435 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.142744 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.243473 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.344087 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.444687 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.545619 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.645983 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.746388 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.847093 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:48 crc kubenswrapper[4687]: E0314 08:58:48.947403 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.048224 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.104553 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/4.log" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.105048 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.106759 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" exitCode=255 Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.106797 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925"} Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.106832 4687 scope.go:117] "RemoveContainer" containerID="47642b8de36af2465d2dd71efc778de5bae4d89274d20b410100df65bb9457c1" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.106953 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.107767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.107797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.107806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:49 crc kubenswrapper[4687]: I0314 08:58:49.108304 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.108472 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.149299 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.249952 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.350634 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.451614 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.552590 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.653395 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.753559 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.854007 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:49 crc kubenswrapper[4687]: E0314 08:58:49.954928 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.056020 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: I0314 08:58:50.112616 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/4.log" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.156176 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.256287 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.356951 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.457632 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.558195 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.658860 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.759677 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.860106 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4687]: E0314 08:58:50.960889 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.053816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.053997 4687 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.054954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.054981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.054991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.055451 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.055599 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.061409 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.124932 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.125118 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.126076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.126103 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.126113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:51 crc kubenswrapper[4687]: I0314 08:58:51.126686 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.126844 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.161733 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.262401 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.363059 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.464099 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.564981 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.666103 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 
08:58:51.767015 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.867524 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4687]: E0314 08:58:51.967689 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.068525 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.168684 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.268905 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.369515 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.470362 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.570541 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.671654 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.772069 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.872660 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 
08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.935371 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.941712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.941750 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.941758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.941772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.941781 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:52Z","lastTransitionTime":"2026-03-14T08:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.950551 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.953494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.953531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.953542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.953557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.953568 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:52Z","lastTransitionTime":"2026-03-14T08:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.962675 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.966014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.966057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.966068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.966083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.966094 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:52Z","lastTransitionTime":"2026-03-14T08:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.976772 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.979795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.979825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.979833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.979845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:52 crc kubenswrapper[4687]: I0314 08:58:52.979854 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:52Z","lastTransitionTime":"2026-03-14T08:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.988811 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.989078 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:52 crc kubenswrapper[4687]: E0314 08:58:52.989143 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.063491 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.091540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.091586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.091597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.091615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.091634 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.193432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.193473 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.193484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.193498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.193507 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.295557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.295592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.295600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.295614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.295623 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.398264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.398324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.398384 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.398405 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.398420 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.500308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.500378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.500389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.500406 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.500439 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.603650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.603702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.603711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.603723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.603732 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.706264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.706299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.706309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.706324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.706350 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.746688 4687 apiserver.go:52] "Watching apiserver" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.751102 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.751473 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zvbgm","openshift-machine-config-operator/machine-config-daemon-s5gw5","openshift-multus/multus-additional-cni-plugins-7qc4m","openshift-multus/multus-xjjs4","openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-bj9jt","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-jkcr7"] Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.751796 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.751810 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.751882 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.751906 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.752610 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.752659 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.752784 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.752163 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.752855 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.752996 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.753538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.753941 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.753998 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.754170 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.754221 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.755410 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.756001 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.756438 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.756518 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.756598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.756820 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.757040 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.757115 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.757217 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.757047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.757923 4687 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.758353 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.758487 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.760715 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.760874 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761009 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761151 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761404 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761581 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761714 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.761843 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762014 4687 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762200 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762407 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762572 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762743 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.762917 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.763087 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.763236 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.763445 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.763658 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.763784 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.764392 
4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.764652 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.768051 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.778422 4687 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.779158 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.789070 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796847 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796887 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796939 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.796985 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797047 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797068 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797141 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797163 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 
08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797184 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797271 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797313 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797364 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797412 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797438 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797546 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797575 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797669 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797714 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797785 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797802 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797794 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797818 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797834 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797851 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797868 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.797886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797903 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797929 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797953 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797963 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797978 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.797996 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798046 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798086 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798108 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798112 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798124 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798140 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798157 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798189 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798222 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798238 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798252 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798281 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798302 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798323 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798374 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798814 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798833 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798864 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798885 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798902 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798940 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798957 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798975 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798995 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799026 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799060 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 
08:58:53.799104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799156 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799178 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799235 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799320 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799360 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799409 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799522 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799567 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799623 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799805 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799828 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:58:53 crc kubenswrapper[4687]: 
I0314 08:58:53.799858 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799924 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799940 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799957 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799992 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800008 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800026 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800094 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " 
Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800116 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800156 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800192 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800210 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" 
(UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800229 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800245 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800261 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800299 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800326 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800372 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800420 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800458 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800474 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800507 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800543 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 
crc kubenswrapper[4687]: I0314 08:58:53.800561 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800577 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800629 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800647 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800734 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " 
Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800792 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800811 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800880 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800895 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800912 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800928 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.800979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801003 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801047 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801067 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801091 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801117 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801140 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801161 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801201 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801222 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801266 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801317 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" 
(UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801407 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801425 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801443 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801461 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801244 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798515 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.798843 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799102 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.799915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800006 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800109 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800147 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800229 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800582 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800773 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.800793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801008 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801054 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801291 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801344 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801399 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801422 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.801911 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:58:54.301894369 +0000 UTC m=+119.290134744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.801984 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802092 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802423 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802444 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802575 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802597 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802682 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.802762 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.803914 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804091 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804311 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804546 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804997 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.804050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805072 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806083 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806255 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806270 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806211 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805433 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.805812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806718 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.806927 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807092 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807148 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807164 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807391 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807366 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807435 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807460 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 
08:58:53.807523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807559 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807607 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807626 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807647 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807767 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1129f889-aeae-45bf-bbdf-e48da879821a-host\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807893 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qd29\" (UniqueName: \"kubernetes.io/projected/1129f889-aeae-45bf-bbdf-e48da879821a-kube-api-access-9qd29\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807911 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-cnibin\") pod \"multus-xjjs4\" (UID: 
\"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-conf-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-bin\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskww\" (UniqueName: \"kubernetes.io/projected/f5b9a325-6445-4634-88e1-3a617c091991-kube-api-access-dskww\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808002 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-os-release\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: 
\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808049 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-system-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808150 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808251 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-multus-daemon-config\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808273 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-netns\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808391 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808439 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-system-cni-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808457 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-os-release\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808538 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wqr\" (UniqueName: \"kubernetes.io/projected/c28f39ed-17ae-4d24-9fa5-cea877046b6f-kube-api-access-89wqr\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808644 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808788 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808776 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808815 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807684 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808720 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-multus\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/f5b9a325-6445-4634-88e1-3a617c091991-hosts-file\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.808982 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-multus-certs\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809058 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809092 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xkp\" (UniqueName: \"kubernetes.io/projected/732cd580-e685-4b88-b227-b113c4be4c55-kube-api-access-c6xkp\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.807991 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809219 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809308 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809408 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810128 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810237 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810408 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.809930 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-k8s-cni-cncf-io\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") 
pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810608 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c28f39ed-17ae-4d24-9fa5-cea877046b6f-rootfs\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cnibin\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810695 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810720 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.810801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811363 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c28f39ed-17ae-4d24-9fa5-cea877046b6f-proxy-tls\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811393 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811416 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c28f39ed-17ae-4d24-9fa5-cea877046b6f-mcd-auth-proxy-config\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811663 4687 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811824 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.811907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.812050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.812082 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.812501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.812720 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.812926 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813532 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813609 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813620 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.813880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.814015 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.814517 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.814886 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.814945 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.815108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.815854 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816735 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62c4j\" (UniqueName: \"kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816872 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1129f889-aeae-45bf-bbdf-e48da879821a-serviceca\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816897 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-etc-kubernetes\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.816978 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.816980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817038 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817094 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817121 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: 
\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817127 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvw9\" (UniqueName: \"kubernetes.io/projected/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-kube-api-access-srvw9\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-cni-binary-copy\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817178 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-hostroot\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817348 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-socket-dir-parent\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-kubelet\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.817349 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.817441 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:54.317417465 +0000 UTC m=+119.305657900 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817487 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.817561 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:54.317551388 +0000 UTC m=+119.305791843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817966 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818049 4687 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818088 4687 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818106 4687 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818118 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818131 4687 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818144 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818158 4687 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818173 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818184 4687 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818198 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818217 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818230 4687 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818243 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818256 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818268 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818281 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818294 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818307 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818321 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818350 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818365 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818383 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.817956 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818532 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818547 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818560 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818572 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818584 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818805 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818879 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818986 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.818993 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819006 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819018 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819030 4687 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819044 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819056 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819069 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819080 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819092 4687 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819103 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819115 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819127 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819139 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819151 4687 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819164 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819176 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819188 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819201 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819213 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819226 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819237 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819249 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819260 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819272 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819285 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819296 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819307 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819319 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819348 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819361 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.819372 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819383 4687 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819400 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819413 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819413 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819425 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819436 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819449 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819462 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819473 4687 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819484 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819496 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819508 4687 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819521 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819535 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819548 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819562 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819584 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819596 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819607 4687 reconciler_common.go:293] "Volume 
detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819619 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819631 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819547 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819650 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819667 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819679 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819701 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819718 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819736 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819750 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 
14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819760 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819771 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819783 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819794 4687 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819804 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819816 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819830 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819842 4687 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819854 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819947 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819979 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.819983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.820201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.820223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.820699 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.820813 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.821374 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.821394 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.821879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.821946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.822042 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.822071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.822595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.826971 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.826988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.831915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.832461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.833143 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.834818 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.834857 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.834886 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.834971 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:54.334948659 +0000 UTC m=+119.323189224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.834978 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.836450 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.836637 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.836913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.836970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.837820 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.838036 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.838222 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.838298 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.838545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.838629 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839004 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839048 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839631 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839757 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.839845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.839989 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.841406 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.841480 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.841541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.840088 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.841620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.841513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.840504 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.841774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.841837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842716 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842900 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842949 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.842980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: E0314 08:58:53.843144 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:54.343124556 +0000 UTC m=+119.331365011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843528 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843762 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.844206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843851 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843909 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843813 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.843981 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.844167 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.844366 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.844526 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.845754 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.846774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.847029 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.847600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.847887 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.849409 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.853121 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.859894 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.862926 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.867615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.870173 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.870615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.878360 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.891171 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.904169 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.912098 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919748 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919757 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:53Z","lastTransitionTime":"2026-03-14T08:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.919813 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920140 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1129f889-aeae-45bf-bbdf-e48da879821a-host\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qd29\" (UniqueName: \"kubernetes.io/projected/1129f889-aeae-45bf-bbdf-e48da879821a-kube-api-access-9qd29\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-cnibin\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1129f889-aeae-45bf-bbdf-e48da879821a-host\") pod \"node-ca-zvbgm\" (UID: 
\"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920204 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-conf-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-bin\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskww\" (UniqueName: \"kubernetes.io/projected/f5b9a325-6445-4634-88e1-3a617c091991-kube-api-access-dskww\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-os-release\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " 
pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-system-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920294 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-multus-daemon-config\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-system-cni-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-os-release\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920352 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-netns\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920375 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-multus\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920417 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wqr\" (UniqueName: \"kubernetes.io/projected/c28f39ed-17ae-4d24-9fa5-cea877046b6f-kube-api-access-89wqr\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-os-release\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920433 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920448 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920462 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-conf-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5b9a325-6445-4634-88e1-3a617c091991-hosts-file\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920504 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-bin\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-multus-certs\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xkp\" (UniqueName: \"kubernetes.io/projected/732cd580-e685-4b88-b227-b113c4be4c55-kube-api-access-c6xkp\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket\") 
pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-k8s-cni-cncf-io\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920891 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-system-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921420 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-multus-daemon-config\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921464 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-system-cni-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-os-release\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 
08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-netns\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-cni-dir\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-cni-multus\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.921967 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.922456 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.922662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-multus-certs\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.922828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.922898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.922928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-run-k8s-cni-cncf-io\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920461 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-cnibin\") pod \"multus-xjjs4\" (UID: 
\"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.923112 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5b9a325-6445-4634-88e1-3a617c091991-hosts-file\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.923275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cnibin\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.920682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cnibin\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c28f39ed-17ae-4d24-9fa5-cea877046b6f-rootfs\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924152 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c28f39ed-17ae-4d24-9fa5-cea877046b6f-proxy-tls\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924207 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c28f39ed-17ae-4d24-9fa5-cea877046b6f-rootfs\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924230 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924276 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c28f39ed-17ae-4d24-9fa5-cea877046b6f-mcd-auth-proxy-config\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924368 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924405 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924418 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62c4j\" (UniqueName: \"kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1129f889-aeae-45bf-bbdf-e48da879821a-serviceca\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.924452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924535 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-etc-kubernetes\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924322 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924615 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-hostroot\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924636 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvw9\" (UniqueName: \"kubernetes.io/projected/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-kube-api-access-srvw9\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-cni-binary-copy\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-socket-dir-parent\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-kubelet\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924769 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924880 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924899 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: 
I0314 08:58:53.924912 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924924 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924936 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924947 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924959 4687 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924970 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924979 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924987 4687 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.924997 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925005 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925014 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925022 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925032 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925041 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925049 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925058 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925066 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925075 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925091 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925095 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-hostroot\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925107 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c28f39ed-17ae-4d24-9fa5-cea877046b6f-mcd-auth-proxy-config\") pod \"machine-config-daemon-s5gw5\" (UID: 
\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925132 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925151 4687 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925163 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925177 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925189 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925200 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925212 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.925223 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925234 4687 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925245 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925259 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925271 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925286 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc 
kubenswrapper[4687]: I0314 08:58:53.925299 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925312 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925325 4687 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925356 4687 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925369 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925381 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925393 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925407 4687 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925418 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925434 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925446 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925458 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925470 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925482 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925494 4687 reconciler_common.go:293] "Volume 
detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925505 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925520 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925536 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925547 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925561 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925572 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925584 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925596 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925605 4687 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925614 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925624 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925632 4687 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925640 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925649 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925657 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925666 4687 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925675 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925684 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925692 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925702 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925710 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 
14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925719 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925727 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925735 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925743 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925752 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925763 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925774 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925784 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925792 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925800 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925809 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925817 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/732cd580-e685-4b88-b227-b113c4be4c55-cni-binary-copy\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925827 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930001 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930021 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930034 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930045 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930056 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930065 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930074 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930084 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930093 4687 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930101 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930111 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930121 4687 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930130 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930139 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.930147 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 08:58:53 
crc kubenswrapper[4687]: I0314 08:58:53.929960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1129f889-aeae-45bf-bbdf-e48da879821a-serviceca\") pod \"node-ca-zvbgm\" (UID: \"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.925865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926450 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926746 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-etc-kubernetes\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-host-var-lib-kubelet\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/732cd580-e685-4b88-b227-b113c4be4c55-multus-socket-dir-parent\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.926844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.927064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.927236 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: \"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.927284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c28f39ed-17ae-4d24-9fa5-cea877046b6f-proxy-tls\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.933874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.934431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qd29\" (UniqueName: \"kubernetes.io/projected/1129f889-aeae-45bf-bbdf-e48da879821a-kube-api-access-9qd29\") pod \"node-ca-zvbgm\" (UID: 
\"1129f889-aeae-45bf-bbdf-e48da879821a\") " pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.935732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskww\" (UniqueName: \"kubernetes.io/projected/f5b9a325-6445-4634-88e1-3a617c091991-kube-api-access-dskww\") pod \"node-resolver-bj9jt\" (UID: \"f5b9a325-6445-4634-88e1-3a617c091991\") " pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.937367 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xkp\" (UniqueName: \"kubernetes.io/projected/732cd580-e685-4b88-b227-b113c4be4c55-kube-api-access-c6xkp\") pod \"multus-xjjs4\" (UID: \"732cd580-e685-4b88-b227-b113c4be4c55\") " pod="openshift-multus/multus-xjjs4" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.941591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wqr\" (UniqueName: \"kubernetes.io/projected/c28f39ed-17ae-4d24-9fa5-cea877046b6f-kube-api-access-89wqr\") pod \"machine-config-daemon-s5gw5\" (UID: \"c28f39ed-17ae-4d24-9fa5-cea877046b6f\") " pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.943294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62c4j\" (UniqueName: \"kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j\") pod \"ovnkube-node-jkcr7\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:53 crc kubenswrapper[4687]: I0314 08:58:53.944373 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvw9\" (UniqueName: \"kubernetes.io/projected/091ec70d-b63b-49ac-aa9f-eb9937f8bd4e-kube-api-access-srvw9\") pod \"multus-additional-cni-plugins-7qc4m\" (UID: 
\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\") " pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.022088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.022156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.022167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.022181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.022209 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.072053 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.078414 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.085445 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:58:54 crc kubenswrapper[4687]: W0314 08:58:54.088499 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d73ed5512970448da2ff7c87f4049ea25a04e674fb286165ad35cfaf850aa391 WatchSource:0}: Error finding container d73ed5512970448da2ff7c87f4049ea25a04e674fb286165ad35cfaf850aa391: Status 404 returned error can't find the container with id d73ed5512970448da2ff7c87f4049ea25a04e674fb286165ad35cfaf850aa391 Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.092366 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zvbgm" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.101316 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" Mar 14 08:58:54 crc kubenswrapper[4687]: W0314 08:58:54.101854 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6c5cbfaabe9ed767fd1de2ad34266f8e5703cbe3b2b8b1e7cf08d42d8c2d83f6 WatchSource:0}: Error finding container 6c5cbfaabe9ed767fd1de2ad34266f8e5703cbe3b2b8b1e7cf08d42d8c2d83f6: Status 404 returned error can't find the container with id 6c5cbfaabe9ed767fd1de2ad34266f8e5703cbe3b2b8b1e7cf08d42d8c2d83f6 Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.110536 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.119229 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bj9jt" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.125259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6c5cbfaabe9ed767fd1de2ad34266f8e5703cbe3b2b8b1e7cf08d42d8c2d83f6"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.126128 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.126165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.126176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.126191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.126202 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.127562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d73ed5512970448da2ff7c87f4049ea25a04e674fb286165ad35cfaf850aa391"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.128212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bc9271fa53c45d8a28b205be33ec39e58a90fb707900b37ecfa3a74438245e11"} Mar 14 08:58:54 crc kubenswrapper[4687]: W0314 08:58:54.129163 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1129f889_aeae_45bf_bbdf_e48da879821a.slice/crio-6fb3d0d427d0573ba86c52454b521a27cc837bb10e3e064d97f20b04a52dff7a WatchSource:0}: Error finding container 6fb3d0d427d0573ba86c52454b521a27cc837bb10e3e064d97f20b04a52dff7a: Status 404 returned error can't find the container with id 6fb3d0d427d0573ba86c52454b521a27cc837bb10e3e064d97f20b04a52dff7a Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.135599 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjjs4" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.161061 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:58:54 crc kubenswrapper[4687]: W0314 08:58:54.205537 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7a910c6_8772_4fc8_b557_8ca75235f11c.slice/crio-4c685799dcfe74d56816d99b16d2d841baa188fc6070066260a981d43b13dfb9 WatchSource:0}: Error finding container 4c685799dcfe74d56816d99b16d2d841baa188fc6070066260a981d43b13dfb9: Status 404 returned error can't find the container with id 4c685799dcfe74d56816d99b16d2d841baa188fc6070066260a981d43b13dfb9 Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.232154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.232194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.232207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.232225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.232237 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.332476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.332594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.332644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.332726 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:58:55.332703767 +0000 UTC m=+120.320944142 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.332742 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.332787 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:55.332775009 +0000 UTC m=+120.321015384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.333009 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.333091 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:58:55.333071066 +0000 UTC m=+120.321311491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.334562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.334616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.334629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.334679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.334690 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.411945 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd"] Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.412375 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.415201 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.415253 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.425612 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.433456 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.433567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433699 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433735 4687 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433761 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433795 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:55.433783022 +0000 UTC m=+120.422023397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433804 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433847 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433884 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:54 crc kubenswrapper[4687]: E0314 08:58:54.433926 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:55.433912685 +0000 UTC m=+120.422153060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.435896 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.437758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.437789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.437799 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.437815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.437826 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.451514 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.462192 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.473315 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.483982 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.497507 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.507546 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.517529 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.526974 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.534626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.534694 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.534723 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.534810 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkt77\" (UniqueName: \"kubernetes.io/projected/b6719a1f-e970-49ec-85c4-df89934fe8e2-kube-api-access-fkt77\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.535279 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.539903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.539963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc 
kubenswrapper[4687]: I0314 08:58:54.539973 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.540010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.540022 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.543614 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.551014 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.635610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkt77\" (UniqueName: \"kubernetes.io/projected/b6719a1f-e970-49ec-85c4-df89934fe8e2-kube-api-access-fkt77\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.635674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.635698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.635726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.636417 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.636482 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641063 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6719a1f-e970-49ec-85c4-df89934fe8e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641704 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.641754 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.652842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkt77\" (UniqueName: \"kubernetes.io/projected/b6719a1f-e970-49ec-85c4-df89934fe8e2-kube-api-access-fkt77\") pod \"ovnkube-control-plane-749d76644c-lcbcd\" (UID: \"b6719a1f-e970-49ec-85c4-df89934fe8e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.732870 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.743443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.743475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.743483 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.743496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.743506 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: W0314 08:58:54.744322 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6719a1f_e970_49ec_85c4_df89934fe8e2.slice/crio-d6104d8a640449c5a3b7041e12331ffbdac23c861640fbc3aebf0f3518f95c2d WatchSource:0}: Error finding container d6104d8a640449c5a3b7041e12331ffbdac23c861640fbc3aebf0f3518f95c2d: Status 404 returned error can't find the container with id d6104d8a640449c5a3b7041e12331ffbdac23c861640fbc3aebf0f3518f95c2d Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.845784 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.845820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.845831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.845845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.845862 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.947693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.947725 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.947737 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.947754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:54 crc kubenswrapper[4687]: I0314 08:58:54.947766 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:54Z","lastTransitionTime":"2026-03-14T08:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.050408 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.050451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.050461 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.050476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.050492 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.125364 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2xptn"] Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.125713 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.125766 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.132567 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" exitCode=0 Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.132655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.132703 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"4c685799dcfe74d56816d99b16d2d841baa188fc6070066260a981d43b13dfb9"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.134292 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a" exitCode=0 Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.134373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.134422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerStarted","Data":"00e2f92dd3f1aaa6e1083c234923011597fded7cc091232825bdc462a448def9"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.135772 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.137661 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerStarted","Data":"584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.137713 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerStarted","Data":"4b5622dbd9d0e9ff297b17283c714837e74d637e86854a6bc58ba5462f16cf87"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.139077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.139120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.139134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"eade326d24a864a21985610c2488d34bad1a9b3f39eb66d53508d2c2cc5d184e"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.141093 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.141134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.143247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" event={"ID":"b6719a1f-e970-49ec-85c4-df89934fe8e2","Type":"ContainerStarted","Data":"fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.143280 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" event={"ID":"b6719a1f-e970-49ec-85c4-df89934fe8e2","Type":"ContainerStarted","Data":"c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.143295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" event={"ID":"b6719a1f-e970-49ec-85c4-df89934fe8e2","Type":"ContainerStarted","Data":"d6104d8a640449c5a3b7041e12331ffbdac23c861640fbc3aebf0f3518f95c2d"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.144008 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.151536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zvbgm" event={"ID":"1129f889-aeae-45bf-bbdf-e48da879821a","Type":"ContainerStarted","Data":"eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.151584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zvbgm" event={"ID":"1129f889-aeae-45bf-bbdf-e48da879821a","Type":"ContainerStarted","Data":"6fb3d0d427d0573ba86c52454b521a27cc837bb10e3e064d97f20b04a52dff7a"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.153627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.153662 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.153670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.153683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.153692 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.154847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bj9jt" event={"ID":"f5b9a325-6445-4634-88e1-3a617c091991","Type":"ContainerStarted","Data":"acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.154876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bj9jt" event={"ID":"f5b9a325-6445-4634-88e1-3a617c091991","Type":"ContainerStarted","Data":"bb1de52700636f3b1e0c2b8a30a07be9b9edaa998b5ea721b5761cbd1a9f4675"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.168425 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.179848 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc 
kubenswrapper[4687]: I0314 08:58:55.197002 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.207630 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.223718 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.233944 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.241268 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvzr\" (UniqueName: \"kubernetes.io/projected/4aae76c5-5354-43fd-8771-0114216bbf40-kube-api-access-7bvzr\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " 
pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.241445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.249431 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.255950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.255979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.255987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.256000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.256008 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.260917 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.273526 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.286932 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.307284 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.323098 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.334921 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.343048 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.343255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7bvzr\" (UniqueName: \"kubernetes.io/projected/4aae76c5-5354-43fd-8771-0114216bbf40-kube-api-access-7bvzr\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.343288 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.343368 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.343401 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.343571 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.343656 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:57.343631797 +0000 UTC m=+122.331872192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.344042 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:58:57.344029546 +0000 UTC m=+122.332269931 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.344305 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.344369 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:58:55.844357735 +0000 UTC m=+120.832598130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.344483 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.345852 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:57.345836651 +0000 UTC m=+122.334077036 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.348069 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.360119 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc4
1305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.360217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.360245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.360254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc 
kubenswrapper[4687]: I0314 08:58:55.360269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.360279 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.361263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bvzr\" (UniqueName: \"kubernetes.io/projected/4aae76c5-5354-43fd-8771-0114216bbf40-kube-api-access-7bvzr\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.372610 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.386310 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.400398 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.415398 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.433740 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.444264 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.444310 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444518 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444548 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444519 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444588 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444602 4687 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444566 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444680 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:57.44465366 +0000 UTC m=+122.432894035 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.444702 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:57.444693911 +0000 UTC m=+122.432934286 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.452057 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.461033 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc 
kubenswrapper[4687]: I0314 08:58:55.462799 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.462840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.462849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.462865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.462875 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.475360 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.489994 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.502877 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec168594
9dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.515091 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.530898 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.566062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.566101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.566110 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.566127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.566139 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.669000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.669041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.669051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.669068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.669083 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:55Z","lastTransitionTime":"2026-03-14T08:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.736070 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.736192 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.736233 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.736482 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.736047 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.736704 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.743629 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.745700 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.746931 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.753562 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.755616 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.755942 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.756934 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.757750 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.758535 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.769468 4687 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.770810 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.771780 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.773247 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.775825 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.776990 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.777862 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.778974 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.779718 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.780393 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.781718 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.782286 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.783080 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.784285 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.785095 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.786299 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.786852 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.788434 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.789036 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.789763 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.791001 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.791368 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.791704 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.794274 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.794922 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.796121 4687 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.796261 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.799393 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.800050 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.800554 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.801780 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.803946 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.804689 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.806063 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.806985 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.807633 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.809614 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.810233 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.810220 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.812666 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.814043 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.815466 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.826033 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.827203 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.827948 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.827994 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.828585 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.829592 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.830074 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.832317 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.833187 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.840225 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.843599 4687 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.849818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.849927 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: E0314 08:58:55.849976 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs 
podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:56.849963393 +0000 UTC m=+121.838203768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.858178 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.869826 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.881894 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.895280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.907804 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.921586 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.940417 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:55 crc kubenswrapper[4687]: I0314 08:58:55.961449 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.159201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerStarted","Data":"7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3"} Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.162057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.162181 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.162198 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 08:58:56 
crc kubenswrapper[4687]: I0314 08:58:56.162208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.176568 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.189547 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.205042 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.222363 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.233141 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.244447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.255863 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.269087 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.284483 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.299466 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.316975 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.325480 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc 
kubenswrapper[4687]: I0314 08:58:56.335252 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.346519 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.736624 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:56 crc kubenswrapper[4687]: E0314 08:58:56.736759 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:58:56 crc kubenswrapper[4687]: I0314 08:58:56.859515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:56 crc kubenswrapper[4687]: E0314 08:58:56.859732 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:56 crc kubenswrapper[4687]: E0314 08:58:56.859825 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 08:58:58.859808507 +0000 UTC m=+123.848048882 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.168282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.168374 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.169901 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3" exitCode=0 Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.169972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3"} Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.171142 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2"} Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.186556 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.203166 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.219967 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.233426 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.246165 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.256820 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.266843 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.277014 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.288928 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.301430 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.318177 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.328041 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc 
kubenswrapper[4687]: I0314 08:58:57.339823 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.351434 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.362463 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.362550 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.362640 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.362613937 +0000 UTC m=+126.350854312 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.362999 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.363041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.363133 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.363168 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.363181 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.36316722 +0000 UTC m=+126.351407595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.363204 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.363196861 +0000 UTC m=+126.351437236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.371507 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.380576 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.388946 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.400196 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.410001 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.420135 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.436613 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.444928 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc 
kubenswrapper[4687]: I0314 08:58:57.453882 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.463954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.463986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.463948 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464101 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464105 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464115 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464123 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464127 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464131 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464168 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.464156963 +0000 UTC m=+126.452397338 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.464181 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.464176554 +0000 UTC m=+126.452416929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.476538 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.485658 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.495729 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.736568 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.736638 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:57 crc kubenswrapper[4687]: I0314 08:58:57.736590 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.736751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.736857 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:58:57 crc kubenswrapper[4687]: E0314 08:58:57.736940 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.175667 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307" exitCode=0 Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.175738 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307"} Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.195094 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.211059 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.226522 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec168594
9dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.239082 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.254447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 
08:58:58.265451 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.277452 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.287557 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.299576 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.311124 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.322209 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.334786 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.352515 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.363577 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:58 crc 
kubenswrapper[4687]: I0314 08:58:58.736683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:58 crc kubenswrapper[4687]: E0314 08:58:58.736861 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:58:58 crc kubenswrapper[4687]: I0314 08:58:58.878872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:58:58 crc kubenswrapper[4687]: E0314 08:58:58.879037 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:58 crc kubenswrapper[4687]: E0314 08:58:58.879121 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.879105634 +0000 UTC m=+127.867346009 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.182562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.185923 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b" exitCode=0 Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.185962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b"} Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.198499 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.219654 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.231721 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.247388 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.257276 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.267781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.280786 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.290680 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.301529 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.311187 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.321079 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.338857 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.351746 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.368998 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.736467 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.736489 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.736523 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:58:59 crc kubenswrapper[4687]: E0314 08:58:59.736869 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:58:59 crc kubenswrapper[4687]: E0314 08:58:59.737052 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:58:59 crc kubenswrapper[4687]: E0314 08:58:59.737119 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:58:59 crc kubenswrapper[4687]: I0314 08:58:59.746950 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.192888 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44" exitCode=0 Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.192968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44"} Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.205786 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.226628 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.245211 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.266913 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.280975 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc 
kubenswrapper[4687]: I0314 08:59:00.293322 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.306974 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.317691 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.330761 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.347585 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.361719 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.373876 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.382744 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.393516 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.402525 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:00 crc kubenswrapper[4687]: I0314 08:59:00.736640 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:00 crc kubenswrapper[4687]: E0314 08:59:00.736794 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:00 crc kubenswrapper[4687]: E0314 08:59:00.811105 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.199454 4687 generic.go:334] "Generic (PLEG): container finished" podID="091ec70d-b63b-49ac-aa9f-eb9937f8bd4e" containerID="d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8" exitCode=0 Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.199541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerDied","Data":"d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8"} Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.204521 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450"} Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.205116 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.205166 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.205185 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.296938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.313167 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.315477 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.316079 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.324078 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.338768 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc4
1305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.349162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.361105 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.373119 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.381797 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.392890 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.405672 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.405805 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:09.405778624 +0000 UTC m=+134.394018999 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.405918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.405954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.406046 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.406070 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.406103 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:09.406085781 +0000 UTC m=+134.394326186 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.406124 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:09.406114932 +0000 UTC m=+134.394355387 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.410485 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.418772 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc 
kubenswrapper[4687]: I0314 08:59:01.428970 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.443593 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.457117 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.469835 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.479420 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.490680 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.502750 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.506976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.507027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507166 4687 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507184 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507192 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507208 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507210 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507222 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507270 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:09.507250707 +0000 UTC m=+134.495491082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.507295 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:09.507283588 +0000 UTC m=+134.495524083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.515359 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.525063 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.541862 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.549757 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc 
kubenswrapper[4687]: I0314 08:59:01.558838 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.570717 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.582399 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.593471 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.616113 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.639783 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.658051 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.668633 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:01Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.736912 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.736940 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:01 crc kubenswrapper[4687]: I0314 08:59:01.736940 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.737035 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.737140 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:01 crc kubenswrapper[4687]: E0314 08:59:01.737225 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.210998 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" event={"ID":"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e","Type":"ContainerStarted","Data":"3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b"} Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.231946 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.242402 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc 
kubenswrapper[4687]: I0314 08:59:02.253102 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.264855 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.280392 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.293044 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.304144 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.317129 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.330778 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.340238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426
f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.350306 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.363891 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.376832 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.387149 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.398186 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.736730 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:02 crc kubenswrapper[4687]: E0314 08:59:02.736871 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:02 crc kubenswrapper[4687]: I0314 08:59:02.921092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:02 crc kubenswrapper[4687]: E0314 08:59:02.921247 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:02 crc kubenswrapper[4687]: E0314 08:59:02.921292 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:10.921279377 +0000 UTC m=+135.909519752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.214893 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/0.log" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.217450 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450" exitCode=1 Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.217486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.218115 4687 scope.go:117] "RemoveContainer" containerID="aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.233938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc 
kubenswrapper[4687]: I0314 08:59:03.245907 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.259346 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.274514 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.291501 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.012999 6694 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.013746 6694 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.013865 6694 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.014287 6694 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:03.014298 6694 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:03.014310 6694 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:03.014322 6694 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:03.014321 6694 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:03.014326 6694 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:03.014350 6694 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:03.014351 6694 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:03.014359 6694 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:03.014382 6694 factory.go:656] Stopping watch factory\\\\nI0314 08:59:03.014395 6694 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34e
da8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.298454 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.298491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.298501 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.298514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.298523 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.304047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.314306 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.315751 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.318155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.318183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.318192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.318206 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.318217 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.327794 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.331162 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.334899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.334931 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.334941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.334958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.334968 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.344866 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.348211 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.356572 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc43
25fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"pod
IP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.357289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.357319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.357350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.357367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.357379 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.369159 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.370492 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.378248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.378281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.378293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.378309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.378322 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.389094 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.391538 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.392101 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.401269 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd1
5c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.411086 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc4
1305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.420657 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:03Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.735900 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.735929 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:03 crc kubenswrapper[4687]: I0314 08:59:03.735947 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.736058 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.736132 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:03 crc kubenswrapper[4687]: E0314 08:59:03.736227 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.222030 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/1.log" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.223170 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/0.log" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.225798 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917" exitCode=1 Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.225851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917"} Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.225904 4687 scope.go:117] "RemoveContainer" containerID="aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.226544 4687 scope.go:117] "RemoveContainer" 
containerID="ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917" Mar 14 08:59:04 crc kubenswrapper[4687]: E0314 08:59:04.227778 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.241195 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58
:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.252031 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.263073 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.272651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.282191 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.293729 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.306036 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.317965 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.334654 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea41664249e6a78525f4d5b8b8c1a027fea5cd654c4fe074bf30ad426f50450\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:03Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.012999 6694 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.013746 6694 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.013865 6694 
reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0314 08:59:03.014287 6694 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:03.014298 6694 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:03.014310 6694 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:03.014322 6694 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:03.014321 6694 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:03.014326 6694 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:03.014350 6694 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:03.014351 6694 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:03.014359 6694 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:03.014382 6694 factory.go:656] Stopping watch factory\\\\nI0314 08:59:03.014395 6694 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.345369 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc 
kubenswrapper[4687]: I0314 08:59:04.357011 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc 
kubenswrapper[4687]: I0314 08:59:04.367638 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.378707 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.396680 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.408113 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:04Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.736205 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:04 crc kubenswrapper[4687]: E0314 08:59:04.736546 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.747215 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 08:59:04 crc kubenswrapper[4687]: I0314 08:59:04.747436 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:04 crc kubenswrapper[4687]: E0314 08:59:04.747653 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.229846 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/1.log" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.233321 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.233496 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.233997 4687 scope.go:117] "RemoveContainer" containerID="ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.234223 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.247239 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.261108 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.277586 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.304650 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.317682 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.333670 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.349222 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.365749 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.381109 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.398366 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.411725 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.426408 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.441170 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.452799 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.464138 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.477563 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.736042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.736163 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.736294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.736404 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.736415 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.736589 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.753798 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.771067 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.800662 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: E0314 08:59:05.811732 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.815060 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc 
kubenswrapper[4687]: I0314 08:59:05.830622 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.842452 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.857161 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.873210 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec168594
9dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.886953 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.901780 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.917409 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.928085 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.938596 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc4
1305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.948875 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.960322 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:05 crc kubenswrapper[4687]: I0314 08:59:05.974396 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:05Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:06 crc kubenswrapper[4687]: I0314 08:59:06.736095 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:06 crc kubenswrapper[4687]: E0314 08:59:06.736253 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:07 crc kubenswrapper[4687]: I0314 08:59:07.736176 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:07 crc kubenswrapper[4687]: I0314 08:59:07.736210 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:07 crc kubenswrapper[4687]: I0314 08:59:07.736283 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:07 crc kubenswrapper[4687]: E0314 08:59:07.736372 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:07 crc kubenswrapper[4687]: E0314 08:59:07.736440 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:07 crc kubenswrapper[4687]: E0314 08:59:07.736524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:08 crc kubenswrapper[4687]: I0314 08:59:08.736282 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:08 crc kubenswrapper[4687]: E0314 08:59:08.736530 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.488665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.488904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.488925 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:25.488889002 +0000 UTC m=+150.477129397 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.489034 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.489206 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.489684 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:25.489665281 +0000 UTC m=+150.477905666 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.489232 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.489944 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:25.489893928 +0000 UTC m=+150.478134313 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.591110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.591181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591361 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591406 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591421 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591461 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591489 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591500 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:25.591478906 +0000 UTC m=+150.579719281 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591505 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.591581 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:25.591558718 +0000 UTC m=+150.579799303 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.736053 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.736201 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.736309 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:09 crc kubenswrapper[4687]: I0314 08:59:09.736305 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.736586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:09 crc kubenswrapper[4687]: E0314 08:59:09.736772 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:10 crc kubenswrapper[4687]: I0314 08:59:10.736327 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:10 crc kubenswrapper[4687]: E0314 08:59:10.736606 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:10 crc kubenswrapper[4687]: E0314 08:59:10.812813 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:11 crc kubenswrapper[4687]: I0314 08:59:11.008205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:11 crc kubenswrapper[4687]: E0314 08:59:11.008469 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:11 crc kubenswrapper[4687]: E0314 08:59:11.008592 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:27.008566065 +0000 UTC m=+151.996806440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:11 crc kubenswrapper[4687]: I0314 08:59:11.736829 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:11 crc kubenswrapper[4687]: I0314 08:59:11.736896 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:11 crc kubenswrapper[4687]: I0314 08:59:11.736890 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:11 crc kubenswrapper[4687]: E0314 08:59:11.737004 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:11 crc kubenswrapper[4687]: E0314 08:59:11.737223 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:11 crc kubenswrapper[4687]: E0314 08:59:11.737302 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:12 crc kubenswrapper[4687]: I0314 08:59:12.736303 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:12 crc kubenswrapper[4687]: E0314 08:59:12.736458 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.578006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.578042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.578052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.578069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.578080 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.591144 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:13Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.595034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.595077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.595090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.595109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.595121 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.605936 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:13Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.609082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.609114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.609126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.609174 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.609195 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.622779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.622808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.622816 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.622829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.622838 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.638566 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:13Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.641710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.641742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.641752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.641766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.641776 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.652709 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:13Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.652823 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.736881 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.736898 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.737026 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:13 crc kubenswrapper[4687]: I0314 08:59:13.736993 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.737208 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:13 crc kubenswrapper[4687]: E0314 08:59:13.737125 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:14 crc kubenswrapper[4687]: I0314 08:59:14.736417 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:14 crc kubenswrapper[4687]: E0314 08:59:14.736587 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.736486 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.736558 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.736520 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:15 crc kubenswrapper[4687]: E0314 08:59:15.736676 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:15 crc kubenswrapper[4687]: E0314 08:59:15.736790 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:15 crc kubenswrapper[4687]: E0314 08:59:15.737207 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.737555 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:15 crc kubenswrapper[4687]: E0314 08:59:15.738327 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.754784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.765464 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.778127 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a
6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.789590 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.800355 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-au
th-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.809714 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: E0314 08:59:15.813366 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.826236 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.837206 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.846033 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.859256 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.876563 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.890087 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.901310 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33a
c108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.912809 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.923047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4687]: I0314 08:59:15.933704 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4687]: I0314 08:59:16.736191 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:16 crc kubenswrapper[4687]: E0314 08:59:16.736667 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:16 crc kubenswrapper[4687]: I0314 08:59:16.736868 4687 scope.go:117] "RemoveContainer" containerID="ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.271895 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/1.log" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.275575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e"} Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.276180 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.295086 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.311278 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.338288 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.349380 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.359630 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.372399 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.384416 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.394567 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.404384 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.423598 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.435522 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc 
kubenswrapper[4687]: I0314 08:59:17.447523 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.459695 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.470706 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.480980 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.494447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.736286 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.736296 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:17 crc kubenswrapper[4687]: I0314 08:59:17.736414 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:17 crc kubenswrapper[4687]: E0314 08:59:17.736580 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:17 crc kubenswrapper[4687]: E0314 08:59:17.736920 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:17 crc kubenswrapper[4687]: E0314 08:59:17.737133 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.280248 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/2.log" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.280870 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/1.log" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.283450 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" exitCode=1 Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.283484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e"} Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.283515 4687 scope.go:117] "RemoveContainer" containerID="ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.284105 4687 scope.go:117] "RemoveContainer" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" Mar 14 08:59:18 crc kubenswrapper[4687]: E0314 08:59:18.284273 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:18 crc 
kubenswrapper[4687]: I0314 08:59:18.304948 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.320605 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.336918 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.352649 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.368274 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.378085 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.389015 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.402047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.413006 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.421574 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.430201 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.446379 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce1aefa84c5799ea9e5019274a5492195d58787d7e9a7095b4a8e9135f40d917\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:04Z\\\",\\\"message\\\":\\\"andler 7 for removal\\\\nI0314 08:59:04.157278 6821 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:04.157291 6821 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:04.157308 6821 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:04.157323 6821 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0314 08:59:04.157324 6821 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:04.157371 6821 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:04.157548 6821 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 08:59:04.157557 6821 factory.go:656] Stopping watch factory\\\\nI0314 08:59:04.157560 6821 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:04.157572 6821 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:04.157575 6821 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0314 08:59:04.157618 6821 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:04.157687 6821 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:04.157712 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:04.157732 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:04.157783 6821 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.459058 4687 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc 
kubenswrapper[4687]: I0314 08:59:18.469627 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.481083 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.495952 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4687]: I0314 08:59:18.736207 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:18 crc kubenswrapper[4687]: E0314 08:59:18.736362 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.289212 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/2.log" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.294447 4687 scope.go:117] "RemoveContainer" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" Mar 14 08:59:19 crc kubenswrapper[4687]: E0314 08:59:19.294619 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.316853 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.334391 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.353590 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec168594
9dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.376281 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.398234 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.416713 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.428140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.442162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc4
1305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.454207 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.471105 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.483282 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.496979 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.513825 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.538427 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.549047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.559176 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33a
c108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.736475 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.736559 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:19 crc kubenswrapper[4687]: E0314 08:59:19.736641 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:19 crc kubenswrapper[4687]: E0314 08:59:19.736785 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:19 crc kubenswrapper[4687]: I0314 08:59:19.736489 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:19 crc kubenswrapper[4687]: E0314 08:59:19.736897 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:20 crc kubenswrapper[4687]: I0314 08:59:20.736086 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:20 crc kubenswrapper[4687]: E0314 08:59:20.736229 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:20 crc kubenswrapper[4687]: E0314 08:59:20.814807 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:21 crc kubenswrapper[4687]: I0314 08:59:21.736361 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:21 crc kubenswrapper[4687]: E0314 08:59:21.736481 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:21 crc kubenswrapper[4687]: I0314 08:59:21.736743 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:21 crc kubenswrapper[4687]: I0314 08:59:21.736766 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:21 crc kubenswrapper[4687]: E0314 08:59:21.736812 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:21 crc kubenswrapper[4687]: E0314 08:59:21.736869 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:22 crc kubenswrapper[4687]: I0314 08:59:22.736373 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:22 crc kubenswrapper[4687]: E0314 08:59:22.737011 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:22 crc kubenswrapper[4687]: I0314 08:59:22.752468 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 14 08:59:22 crc kubenswrapper[4687]: I0314 08:59:22.752536 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.736727 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.736798 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.736865 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.736926 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.737005 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.737061 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.826418 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.826464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.826478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.826497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.826511 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.841286 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.845305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.845364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.845407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.845423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.845434 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.858278 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.861863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.861886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.861894 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.861908 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.861917 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.874169 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.877797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.877838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.877850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.877866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.877878 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.892517 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.896543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.896582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.896594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.896606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4687]: I0314 08:59:23.896615 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.907280 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4687]: E0314 08:59:23.907407 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:24 crc kubenswrapper[4687]: I0314 08:59:24.736269 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:24 crc kubenswrapper[4687]: E0314 08:59:24.736507 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.549628 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.549770 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:57.549753518 +0000 UTC m=+182.537993893 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.549826 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.549850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.549929 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.549949 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.549968 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:57.549960413 +0000 UTC m=+182.538200788 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.549981 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:57.549974613 +0000 UTC m=+182.538214988 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.650772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.650809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650916 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650938 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650917 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650949 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650961 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.650972 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:25 crc 
kubenswrapper[4687]: E0314 08:59:25.650998 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:57.650983547 +0000 UTC m=+182.639223922 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.651010 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:57.651004958 +0000 UTC m=+182.639245333 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.736646 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.736689 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.736878 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.736937 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.736989 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.737796 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.749632 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.751728 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc 
kubenswrapper[4687]: I0314 08:59:25.770737 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.782071 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.793500 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.808207 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: E0314 08:59:25.815695 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.829955 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.842219 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.853858 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.866654 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.879277 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.893616 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.904927 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.918327 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347
a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.930139 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.942361 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.953744 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.965053 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:25 crc kubenswrapper[4687]: I0314 08:59:25.975275 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:25Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:26 crc kubenswrapper[4687]: I0314 08:59:26.736663 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:26 crc kubenswrapper[4687]: E0314 08:59:26.736811 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:27 crc kubenswrapper[4687]: I0314 08:59:27.065958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:27 crc kubenswrapper[4687]: E0314 08:59:27.066159 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:27 crc kubenswrapper[4687]: E0314 08:59:27.066466 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:59.066438465 +0000 UTC m=+184.054678880 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:27 crc kubenswrapper[4687]: I0314 08:59:27.736042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:27 crc kubenswrapper[4687]: I0314 08:59:27.736142 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:27 crc kubenswrapper[4687]: E0314 08:59:27.736175 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:27 crc kubenswrapper[4687]: I0314 08:59:27.736049 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:27 crc kubenswrapper[4687]: E0314 08:59:27.736278 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:27 crc kubenswrapper[4687]: E0314 08:59:27.736394 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:28 crc kubenswrapper[4687]: I0314 08:59:28.736097 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:28 crc kubenswrapper[4687]: E0314 08:59:28.736214 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:29 crc kubenswrapper[4687]: I0314 08:59:29.736507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:29 crc kubenswrapper[4687]: E0314 08:59:29.737132 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:29 crc kubenswrapper[4687]: I0314 08:59:29.736539 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:29 crc kubenswrapper[4687]: I0314 08:59:29.737266 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:29 crc kubenswrapper[4687]: I0314 08:59:29.736515 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:29 crc kubenswrapper[4687]: E0314 08:59:29.737480 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:29 crc kubenswrapper[4687]: E0314 08:59:29.737599 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:29 crc kubenswrapper[4687]: E0314 08:59:29.737661 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:30 crc kubenswrapper[4687]: I0314 08:59:30.736550 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:30 crc kubenswrapper[4687]: E0314 08:59:30.736720 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:30 crc kubenswrapper[4687]: E0314 08:59:30.817264 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:31 crc kubenswrapper[4687]: I0314 08:59:31.736684 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:31 crc kubenswrapper[4687]: I0314 08:59:31.736722 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:31 crc kubenswrapper[4687]: I0314 08:59:31.736696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:31 crc kubenswrapper[4687]: E0314 08:59:31.736884 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:31 crc kubenswrapper[4687]: E0314 08:59:31.736798 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:31 crc kubenswrapper[4687]: E0314 08:59:31.736982 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:32 crc kubenswrapper[4687]: I0314 08:59:32.736607 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:32 crc kubenswrapper[4687]: E0314 08:59:32.736798 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:32 crc kubenswrapper[4687]: I0314 08:59:32.737358 4687 scope.go:117] "RemoveContainer" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" Mar 14 08:59:32 crc kubenswrapper[4687]: E0314 08:59:32.737502 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:33 crc kubenswrapper[4687]: I0314 08:59:33.736023 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:33 crc kubenswrapper[4687]: I0314 08:59:33.736073 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:33 crc kubenswrapper[4687]: I0314 08:59:33.736095 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:33 crc kubenswrapper[4687]: E0314 08:59:33.736163 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:33 crc kubenswrapper[4687]: E0314 08:59:33.736272 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:33 crc kubenswrapper[4687]: E0314 08:59:33.736531 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.017912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.017957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.017971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.017987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.017999 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:34Z","lastTransitionTime":"2026-03-14T08:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.033515 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:34Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.037283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.037356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.037376 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.037398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.037415 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:34Z","lastTransitionTime":"2026-03-14T08:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.051530 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:34Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.055360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.055394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.055403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.055415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.055424 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:34Z","lastTransitionTime":"2026-03-14T08:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.072357 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:34Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.076738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.076800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.076818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.076842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.076862 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:34Z","lastTransitionTime":"2026-03-14T08:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.092899 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:34Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.098835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.098873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.098883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.098898 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.098907 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:34Z","lastTransitionTime":"2026-03-14T08:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.118848 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:34Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.119015 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:34 crc kubenswrapper[4687]: I0314 08:59:34.736507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:34 crc kubenswrapper[4687]: E0314 08:59:34.736892 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.736960 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.737049 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.737060 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:35 crc kubenswrapper[4687]: E0314 08:59:35.738029 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:35 crc kubenswrapper[4687]: E0314 08:59:35.737817 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:35 crc kubenswrapper[4687]: E0314 08:59:35.738153 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.751686 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.768516 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.786521 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.799600 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: E0314 08:59:35.817862 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.820122 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.832535 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.842248 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.859150 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.872212 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.884253 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.894071 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.905207 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.925397 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.937374 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.959449 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.971800 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.981535 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:35 crc kubenswrapper[4687]: I0314 08:59:35.995575 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:35Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:36 crc kubenswrapper[4687]: I0314 08:59:36.006567 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:36Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:36 crc kubenswrapper[4687]: I0314 08:59:36.736616 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:36 crc kubenswrapper[4687]: E0314 08:59:36.736849 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:37 crc kubenswrapper[4687]: I0314 08:59:37.736788 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:37 crc kubenswrapper[4687]: I0314 08:59:37.736825 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:37 crc kubenswrapper[4687]: I0314 08:59:37.736867 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:37 crc kubenswrapper[4687]: E0314 08:59:37.736948 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:37 crc kubenswrapper[4687]: E0314 08:59:37.737080 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:37 crc kubenswrapper[4687]: E0314 08:59:37.737182 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:38 crc kubenswrapper[4687]: I0314 08:59:38.736414 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:38 crc kubenswrapper[4687]: E0314 08:59:38.736568 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:39 crc kubenswrapper[4687]: I0314 08:59:39.736648 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:39 crc kubenswrapper[4687]: I0314 08:59:39.736711 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:39 crc kubenswrapper[4687]: E0314 08:59:39.736804 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:39 crc kubenswrapper[4687]: I0314 08:59:39.736648 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:39 crc kubenswrapper[4687]: E0314 08:59:39.737016 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:39 crc kubenswrapper[4687]: E0314 08:59:39.737114 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:40 crc kubenswrapper[4687]: I0314 08:59:40.735823 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:40 crc kubenswrapper[4687]: E0314 08:59:40.736016 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:40 crc kubenswrapper[4687]: I0314 08:59:40.736581 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:40 crc kubenswrapper[4687]: E0314 08:59:40.736758 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:40 crc kubenswrapper[4687]: E0314 08:59:40.819682 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.363069 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/0.log" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.363150 4687 generic.go:334] "Generic (PLEG): container finished" podID="732cd580-e685-4b88-b227-b113c4be4c55" containerID="584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff" exitCode=1 Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.363191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerDied","Data":"584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff"} Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.363854 4687 scope.go:117] "RemoveContainer" containerID="584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.380192 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.404686 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.414587 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.431290 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.445032 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.453699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.465423 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.481400 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.493033 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.505413 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.516540 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.547121 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a
6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.564113 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.579495 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-au
th-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.591017 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.601892 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.619958 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.632046 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.641109 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:41Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.736880 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.737023 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:41 crc kubenswrapper[4687]: E0314 08:59:41.737182 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:41 crc kubenswrapper[4687]: I0314 08:59:41.737213 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:41 crc kubenswrapper[4687]: E0314 08:59:41.737440 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:41 crc kubenswrapper[4687]: E0314 08:59:41.737403 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.369176 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/0.log" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.369251 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerStarted","Data":"f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e"} Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.394905 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f
3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.409538 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.426089 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.440856 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.452403 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.463967 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.477563 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.492727 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.507945 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.519591 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.536539 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.555534 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.570811 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.603744 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.618673 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.628782 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.643128 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.654389 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.668511 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4687]: I0314 08:59:42.735884 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:42 crc kubenswrapper[4687]: E0314 08:59:42.736037 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:43 crc kubenswrapper[4687]: I0314 08:59:43.736256 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:43 crc kubenswrapper[4687]: I0314 08:59:43.736256 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:43 crc kubenswrapper[4687]: E0314 08:59:43.736500 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:43 crc kubenswrapper[4687]: E0314 08:59:43.736409 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:43 crc kubenswrapper[4687]: I0314 08:59:43.736277 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:43 crc kubenswrapper[4687]: E0314 08:59:43.736693 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.251002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.251034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.251043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.251057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.251066 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:44Z","lastTransitionTime":"2026-03-14T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.265160 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:44Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.268176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.268222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.268234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.268250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.268262 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:44Z","lastTransitionTime":"2026-03-14T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.280916 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:44Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.284123 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.284155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.284166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.284188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.284198 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:44Z","lastTransitionTime":"2026-03-14T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.294986 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:44Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.298855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.298895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.298906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.298921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.298931 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:44Z","lastTransitionTime":"2026-03-14T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.309756 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:44Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.312685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.312712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.312722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.312734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.312742 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:44Z","lastTransitionTime":"2026-03-14T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.328682 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:44Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.328790 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:44 crc kubenswrapper[4687]: I0314 08:59:44.736177 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:44 crc kubenswrapper[4687]: E0314 08:59:44.736440 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.736735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.736812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:45 crc kubenswrapper[4687]: E0314 08:59:45.736883 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.736917 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:45 crc kubenswrapper[4687]: E0314 08:59:45.737091 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:45 crc kubenswrapper[4687]: E0314 08:59:45.737196 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.748679 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.762576 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.772628 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e59
6e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.785547 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.795694 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.807402 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.816833 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: E0314 08:59:45.820175 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.832399 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.844207 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.859187 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.870964 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.883966 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.899461 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.913465 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.933822 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.944826 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.969162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.981623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:45 crc kubenswrapper[4687]: I0314 08:59:45.992709 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:45Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:46 crc kubenswrapper[4687]: I0314 08:59:46.736455 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:46 crc kubenswrapper[4687]: E0314 08:59:46.736606 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:47 crc kubenswrapper[4687]: I0314 08:59:47.736592 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:47 crc kubenswrapper[4687]: I0314 08:59:47.736607 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:47 crc kubenswrapper[4687]: I0314 08:59:47.736726 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:47 crc kubenswrapper[4687]: E0314 08:59:47.736868 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:47 crc kubenswrapper[4687]: E0314 08:59:47.736985 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:47 crc kubenswrapper[4687]: E0314 08:59:47.737428 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:47 crc kubenswrapper[4687]: I0314 08:59:47.737795 4687 scope.go:117] "RemoveContainer" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.386951 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/2.log" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.389987 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.390424 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.406067 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.417799 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.430125 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.442004 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.459304 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a
6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.478296 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.490616 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec 
cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.502540 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.514554 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.524679 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.534596 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.543031 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.560262 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.570404 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.581203 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.593481 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.604975 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.621216 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4687]: I0314 08:59:48.631630 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:48Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc 
kubenswrapper[4687]: I0314 08:59:48.736488 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:48 crc kubenswrapper[4687]: E0314 08:59:48.736619 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.397567 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/3.log" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.398745 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/2.log" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.402420 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" exitCode=1 Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.402486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.402540 4687 scope.go:117] "RemoveContainer" containerID="d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.403650 4687 scope.go:117] "RemoveContainer" 
containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 08:59:49 crc kubenswrapper[4687]: E0314 08:59:49.403927 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.419078 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58
:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.431880 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.444187 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.458859 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.470189 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.488299 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.501698 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.511651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.525235 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.537458 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.559784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4621a59dae0bb6cb7a5638ddfdf106472e19f56a7fe3ad661664e8721051d0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"message\\\":\\\"rator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430695 7024 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0314 08:59:17.430702 7024 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0314 08:59:17.430688 7024 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:17.430710 7024 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0314 08:59:17.430716 7024 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0314 08:59:17.429461 7024 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:48Z\\\",\\\"message\\\":\\\"ent:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.474469 7368 transact.go:42] Configuring OVN: 
[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.475115 7368 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:48.475143 7368 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:48.475199 7368 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.570157 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.588044 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9363fa1ac55d90bf91f7c5b024b70738
9dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.599017 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.609361 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.618607 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.631578 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.641894 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.653902 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:49Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.737557 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.737614 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:49 crc kubenswrapper[4687]: I0314 08:59:49.737614 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:49 crc kubenswrapper[4687]: E0314 08:59:49.737713 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:49 crc kubenswrapper[4687]: E0314 08:59:49.737759 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:49 crc kubenswrapper[4687]: E0314 08:59:49.737829 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.407559 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/3.log" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.412233 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 08:59:50 crc kubenswrapper[4687]: E0314 08:59:50.412444 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.428701 
4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.444048 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.455760 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325f
c2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.474082 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.494365 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.508551 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.518545 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.530672 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.539760 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.551903 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.565097 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.576403 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.591955 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.604998 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.622026 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:48Z\\\",\\\"message\\\":\\\"ent:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.474469 7368 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.475115 7368 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:48.475143 7368 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:48.475199 7368 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.632671 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.654796 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.667543 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.680020 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:50Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:50 crc kubenswrapper[4687]: I0314 08:59:50.736790 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:50 crc kubenswrapper[4687]: E0314 08:59:50.736956 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:50 crc kubenswrapper[4687]: E0314 08:59:50.821430 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:51 crc kubenswrapper[4687]: I0314 08:59:51.736870 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:51 crc kubenswrapper[4687]: I0314 08:59:51.736880 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:51 crc kubenswrapper[4687]: E0314 08:59:51.737008 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:51 crc kubenswrapper[4687]: E0314 08:59:51.737348 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:51 crc kubenswrapper[4687]: I0314 08:59:51.737261 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:51 crc kubenswrapper[4687]: E0314 08:59:51.737511 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:51 crc kubenswrapper[4687]: I0314 08:59:51.737785 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 08:59:51 crc kubenswrapper[4687]: E0314 08:59:51.738165 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:59:52 crc kubenswrapper[4687]: I0314 08:59:52.736693 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:52 crc kubenswrapper[4687]: E0314 08:59:52.736958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:53 crc kubenswrapper[4687]: I0314 08:59:53.735919 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:53 crc kubenswrapper[4687]: I0314 08:59:53.735980 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:53 crc kubenswrapper[4687]: E0314 08:59:53.736089 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:53 crc kubenswrapper[4687]: I0314 08:59:53.736167 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:53 crc kubenswrapper[4687]: E0314 08:59:53.736365 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:53 crc kubenswrapper[4687]: E0314 08:59:53.736424 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.707557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.707619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.707634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.707654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.707667 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:54Z","lastTransitionTime":"2026-03-14T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.758672 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.760190 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.762752 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:54Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.768034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.768084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.768098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.768114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.768125 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:54Z","lastTransitionTime":"2026-03-14T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.780502 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:54Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.783991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.784040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.784055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.784073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.784085 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:54Z","lastTransitionTime":"2026-03-14T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.796971 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:54Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.800304 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.800349 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.800358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.800372 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.800383 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:54Z","lastTransitionTime":"2026-03-14T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.811589 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:54Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.814643 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.814681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.814691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.814706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:54 crc kubenswrapper[4687]: I0314 08:59:54.814716 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:54Z","lastTransitionTime":"2026-03-14T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.824781 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:54Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:54 crc kubenswrapper[4687]: E0314 08:59:54.824937 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.736002 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:55 crc kubenswrapper[4687]: E0314 08:59:55.736390 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.736168 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:55 crc kubenswrapper[4687]: E0314 08:59:55.736477 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.736121 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:55 crc kubenswrapper[4687]: E0314 08:59:55.736550 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.748314 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.759190 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.772226 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.785961 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.805830 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:48Z\\\",\\\"message\\\":\\\"ent:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.474469 7368 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.475115 7368 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:48.475143 7368 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:48.475199 7368 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.816266 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.838666 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.851892 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.864245 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.874043 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.885515 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.893742 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.903979 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.913432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.924097 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.932792 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.942743 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.950429 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: I0314 08:59:55.960048 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:55Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:55 crc kubenswrapper[4687]: E0314 08:59:55.990463 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:56 crc kubenswrapper[4687]: I0314 08:59:56.736609 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:56 crc kubenswrapper[4687]: E0314 08:59:56.736828 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.609536 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.609626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.609655 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.609815 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.609854 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:01.609823579 +0000 UTC m=+246.598063964 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.609903 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:01.609889261 +0000 UTC m=+246.598129726 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.609979 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.610021 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:01.610011785 +0000 UTC m=+246.598252230 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.710910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.711002 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711266 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711266 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711302 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711320 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711324 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711371 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711478 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:01.711441419 +0000 UTC m=+246.699681844 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.711522 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:01.71150183 +0000 UTC m=+246.699742255 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.736760 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.736781 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:57 crc kubenswrapper[4687]: I0314 08:59:57.736802 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.737012 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.737133 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:57 crc kubenswrapper[4687]: E0314 08:59:57.737237 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:58 crc kubenswrapper[4687]: I0314 08:59:58.736285 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:58 crc kubenswrapper[4687]: E0314 08:59:58.736798 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 08:59:59 crc kubenswrapper[4687]: I0314 08:59:59.127304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 08:59:59 crc kubenswrapper[4687]: E0314 08:59:59.127560 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:59 crc kubenswrapper[4687]: E0314 08:59:59.127678 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs podName:4aae76c5-5354-43fd-8771-0114216bbf40 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:03.127653205 +0000 UTC m=+248.115893810 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs") pod "network-metrics-daemon-2xptn" (UID: "4aae76c5-5354-43fd-8771-0114216bbf40") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:59 crc kubenswrapper[4687]: I0314 08:59:59.736535 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:59 crc kubenswrapper[4687]: I0314 08:59:59.736775 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:59 crc kubenswrapper[4687]: E0314 08:59:59.736906 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:59 crc kubenswrapper[4687]: E0314 08:59:59.736772 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:59 crc kubenswrapper[4687]: I0314 08:59:59.736565 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:59 crc kubenswrapper[4687]: E0314 08:59:59.737631 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:00 crc kubenswrapper[4687]: I0314 09:00:00.736681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:00 crc kubenswrapper[4687]: E0314 09:00:00.736902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:00 crc kubenswrapper[4687]: E0314 09:00:00.992435 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:01 crc kubenswrapper[4687]: I0314 09:00:01.735902 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:01 crc kubenswrapper[4687]: I0314 09:00:01.736004 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:01 crc kubenswrapper[4687]: E0314 09:00:01.736040 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:01 crc kubenswrapper[4687]: I0314 09:00:01.736098 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:01 crc kubenswrapper[4687]: E0314 09:00:01.736190 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:01 crc kubenswrapper[4687]: E0314 09:00:01.736234 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:02 crc kubenswrapper[4687]: I0314 09:00:02.735823 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:02 crc kubenswrapper[4687]: E0314 09:00:02.735965 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:03 crc kubenswrapper[4687]: I0314 09:00:03.736408 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:03 crc kubenswrapper[4687]: I0314 09:00:03.736473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:03 crc kubenswrapper[4687]: I0314 09:00:03.736493 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:03 crc kubenswrapper[4687]: E0314 09:00:03.736984 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:03 crc kubenswrapper[4687]: I0314 09:00:03.737250 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:00:03 crc kubenswrapper[4687]: E0314 09:00:03.737240 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:03 crc kubenswrapper[4687]: E0314 09:00:03.737360 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:03 crc kubenswrapper[4687]: E0314 09:00:03.737434 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.735911 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:04 crc kubenswrapper[4687]: E0314 09:00:04.736145 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.992136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.992224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.992249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.992281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:04 crc kubenswrapper[4687]: I0314 09:00:04.992305 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:04Z","lastTransitionTime":"2026-03-14T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.007823 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.013932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.013994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.014005 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.014026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.014037 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:05Z","lastTransitionTime":"2026-03-14T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.025661 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.029985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.030012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.030022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.030037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.030046 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:05Z","lastTransitionTime":"2026-03-14T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.044312 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.048209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.048243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.048253 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.048269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.048282 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:05Z","lastTransitionTime":"2026-03-14T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.059969 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.063578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.063644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.063661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.063686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.063703 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:05Z","lastTransitionTime":"2026-03-14T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.079048 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"082ff9a8-763f-4c35-a8f4-a146ab033d00\\\",\\\"systemUUID\\\":\\\"9c5f1646-8f12-408a-97a5-53cd4c1286c6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.079495 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.736498 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.736533 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.736725 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.736747 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.736881 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.737004 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.737480 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.737638 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.748057 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xjjs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732cd580-e685-4b88-b227-b113c4be4c55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:40Z\\\",\\\"message\\\":\\\"2026-03-14T08:58:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174\\\\n2026-03-14T08:58:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7af492c5-d85b-41f8-b208-69d504983174 to /host/opt/cni/bin/\\\\n2026-03-14T08:58:55Z [verbose] multus-daemon started\\\\n2026-03-14T08:58:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6xkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjjs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.758125 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.768651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f8f963d072a38b19e36ad7627ea834243416edc1b3f7d6b678b99d3ebd368a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.782068 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"091ec70d-b63b-49ac-aa9f-eb9937f8bd4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c132764ffec2836cdbd9380b613490804b90f18342495f4b0c7aa70fd1a6f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1efe4101c1aebcf11177fa2d8222c3c046ea634af3856449e3827f4b7b0f010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7398ab107ec8d2493310fefadce31ffa76345cfd6af83130bb85d35b4ff12df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aae7c02c84dbb4394c427edec5a2ea58260f74d7d8812ad1001c7e4d0e54307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72e2a6d033c6bbeac6a22f93bcdeecd23c0f2610fb3d236ff1fb2444c46d739b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3a38cdfb675180cc60ad0d1d0813befd9adffcbdba5949af07ec631b0f77e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d11cdfebfd7f264f92914021d674efb69bc30df4838b2b7f0fb4cd247fd940d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7qc4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.799042 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6719a1f-e970-49ec-85c4-df89934fe8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8d88710240c2fc4325fc2f82442db9a5512449d23b739bda7e5ed6df340810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec1685949dbb34d5dde6e71a4ea5985d6e596e96ca6b533618eef7dd7312099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lcbcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.813605 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ff72d2-1f06-49a9-a023-c792c80ad598\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0314 08:58:47.421963 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0314 08:58:47.422120 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:47.422769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1769652380/tls.crt::/tmp/serving-cert-1769652380/tls.key\\\\\\\"\\\\nI0314 08:58:48.351782 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0314 08:58:48.355547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0314 08:58:48.355566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0314 08:58:48.355588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0314 08:58:48.355593 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0314 08:58:48.362106 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0314 08:58:48.362129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0314 08:58:48.362138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0314 08:58:48.362141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0314 08:58:48.362144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0314 08:58:48.362146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0314 08:58:48.362227 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0314 08:58:48.364678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.833798 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00464e6c2218cf36278a73d5421f075e7483ca2b3a7c4616c840d08c3fd4a533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c2bb13b06133c94592db42a46cc258606d2afe0bb84697ddde385b41dc906f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.845889 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.858199 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zvbgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1129f889-aeae-45bf-bbdf-e48da879821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb3a46c8e769cbb50717e8e8eafe56de90cbf9b2a8dc7f37a0b9aee5b6102976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qd29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zvbgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.869227 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c28f39ed-17ae-4d24-9fa5-cea877046b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a0a45fb794eafc31b3f60a3f180cfbe556d70d9451b14785cb6b41c2ff71e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s5gw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.878112 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bj9jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b9a325-6445-4634-88e1-3a617c091991\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acfd02904f3d3b01de840778ea3b8e02d95c9d18a95d5420564fbd53d49c55cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dskww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bj9jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.889900 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a140707-2c73-4567-85db-0c0d1a4fe1e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd1220a70fb6edfccd44744e2eb8c17d63f32e0a36ed6278497db000eca095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f626e01c69734902d62e9404251be8a131347135956347a212a79509db559a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:57:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0314 08:57:22.795849 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:22.797662 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:22.800000 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:22.800683 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:57:52.427170 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:57:52.427253 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09064dfaded00355c78e5f91ac7d47b373724782295f9fe02b7cebc586dcb292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867b49cce99191045a7c0f56cc03a2170951ec1c3d4b7bc24f45b4a11aa6d7a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.900391 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42fabe3c-255e-4f18-a3fc-39775c0e8a89\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b72c3f210b560e0ac2f3cebb5f91d332c53a763f2754a7937fd1df606a187bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b92d8f5cc55fdc78170c34440bb5fa3491235807114180c49a26dfb9dbdee2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3e6cdca1b0bd84ac3b9a0052a64dd10e85430cbcefc9f3863005e72d4600c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa83d33ac108496d6b6c5a9d1165cf219903c85615ba96307d395afe409c0b20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.914045 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f39f0533-8cd2-471e-be05-0822331be73b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f0875708bfda752262608b519c4d7a12e46b21685ff7a8d5b9398e68e1c05cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://669a5ff1ccd9c74fbd2e6ac6452609ea4f61c4992df53af4c3994bd88f1a6987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.924804 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.936999 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02afba1f26c25130aca09a5366fbd7e001577192fbfb6636ca01eae06eec12f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.953578 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a910c6-8772-4fc8-b557-8ca75235f11c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:48Z\\\",\\\"message\\\":\\\"ent:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.474469 7368 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0314 08:59:48.475115 7368 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:48.475143 7368 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:48.475199 7368 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d43e1448c7b63594
ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:58:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62c4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jkcr7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.962913 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aae76c5-5354-43fd-8771-0114216bbf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bvzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xptn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: I0314 09:00:05.980858 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec1e67b-c209-432f-8146-4d5e81640640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c72551cdccb65a40dbfcc63446fdc0759094eb06dadce8a13ea7668707f524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0d36963cbbc2ef03c7d4a68c15cb3c32fdd8bacb7d6822d52ae4ac2391dc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc9e3bb074e88fe0f15f117edf91d67484d8f21674b2b33ac03a318c2d08c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d9363fa1ac55d90bf91f7c5b024b707389dc8a928fbd6678e182345d0d34f7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb0043d0fa68d8f70cc81696ccbb2739d7fe57509ab772fee1cb9d3ee1e857e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0218855fb4fbe9dd82cb30812cd8378dc5c42e9c58fe17b4f235f28040bb4306\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95ec586d75b83ba63294d156f13b3df32b61819ff810a58546bc2e3303a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c70c088e8653f6be890c4cf2d81d765bbf7096ec074ed966e2a913c494c39ddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:56:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4687]: E0314 09:00:05.992757 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:06 crc kubenswrapper[4687]: I0314 09:00:06.736794 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:06 crc kubenswrapper[4687]: E0314 09:00:06.737058 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:07 crc kubenswrapper[4687]: I0314 09:00:07.736524 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:07 crc kubenswrapper[4687]: I0314 09:00:07.736558 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:07 crc kubenswrapper[4687]: I0314 09:00:07.736619 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:07 crc kubenswrapper[4687]: E0314 09:00:07.736713 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:07 crc kubenswrapper[4687]: E0314 09:00:07.736833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:07 crc kubenswrapper[4687]: E0314 09:00:07.736912 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:08 crc kubenswrapper[4687]: I0314 09:00:08.736442 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:08 crc kubenswrapper[4687]: E0314 09:00:08.736637 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:09 crc kubenswrapper[4687]: I0314 09:00:09.737147 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:09 crc kubenswrapper[4687]: I0314 09:00:09.737292 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:09 crc kubenswrapper[4687]: I0314 09:00:09.737426 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:09 crc kubenswrapper[4687]: E0314 09:00:09.737418 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:09 crc kubenswrapper[4687]: E0314 09:00:09.737571 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:09 crc kubenswrapper[4687]: E0314 09:00:09.737677 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:10 crc kubenswrapper[4687]: I0314 09:00:10.736728 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:10 crc kubenswrapper[4687]: E0314 09:00:10.737097 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:10 crc kubenswrapper[4687]: E0314 09:00:10.994376 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:11 crc kubenswrapper[4687]: I0314 09:00:11.735888 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:11 crc kubenswrapper[4687]: I0314 09:00:11.735930 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:11 crc kubenswrapper[4687]: I0314 09:00:11.735899 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:11 crc kubenswrapper[4687]: E0314 09:00:11.736021 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:11 crc kubenswrapper[4687]: E0314 09:00:11.736130 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:11 crc kubenswrapper[4687]: E0314 09:00:11.736214 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:12 crc kubenswrapper[4687]: I0314 09:00:12.735812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:12 crc kubenswrapper[4687]: E0314 09:00:12.735938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:13 crc kubenswrapper[4687]: I0314 09:00:13.736641 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:13 crc kubenswrapper[4687]: I0314 09:00:13.736697 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:13 crc kubenswrapper[4687]: I0314 09:00:13.736645 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:13 crc kubenswrapper[4687]: E0314 09:00:13.736762 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:13 crc kubenswrapper[4687]: E0314 09:00:13.736873 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:13 crc kubenswrapper[4687]: E0314 09:00:13.736938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:14 crc kubenswrapper[4687]: I0314 09:00:14.736614 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:14 crc kubenswrapper[4687]: E0314 09:00:14.736766 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.365412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.365453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.365464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.365480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.365492 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:15Z","lastTransitionTime":"2026-03-14T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.453126 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr"] Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.453603 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.456548 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.456642 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.456882 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.456931 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.512056 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7qc4m" podStartSLOduration=128.512038466 podStartE2EDuration="2m8.512038466s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.511046003 +0000 UTC m=+200.499286378" watchObservedRunningTime="2026-03-14 09:00:15.512038466 +0000 UTC m=+200.500278841" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.522073 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lcbcd" 
podStartSLOduration=128.522052995 podStartE2EDuration="2m8.522052995s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.521975882 +0000 UTC m=+200.510216257" watchObservedRunningTime="2026-03-14 09:00:15.522052995 +0000 UTC m=+200.510293370" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.533938 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podStartSLOduration=128.533919659 podStartE2EDuration="2m8.533919659s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.533492088 +0000 UTC m=+200.521732483" watchObservedRunningTime="2026-03-14 09:00:15.533919659 +0000 UTC m=+200.522160034" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.561665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=53.561648295 podStartE2EDuration="53.561648295s" podCreationTimestamp="2026-03-14 08:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.561632975 +0000 UTC m=+200.549873350" watchObservedRunningTime="2026-03-14 09:00:15.561648295 +0000 UTC m=+200.549888670" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.561780 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bj9jt" podStartSLOduration=128.561776348 podStartE2EDuration="2m8.561776348s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 09:00:15.548669944 +0000 UTC m=+200.536910319" watchObservedRunningTime="2026-03-14 09:00:15.561776348 +0000 UTC m=+200.550016723" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.594502 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.594594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.594717 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fce235-e413-4c75-8f20-43b5628000a2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.594743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78fce235-e413-4c75-8f20-43b5628000a2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc 
kubenswrapper[4687]: I0314 09:00:15.594772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78fce235-e413-4c75-8f20-43b5628000a2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.609482 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zvbgm" podStartSLOduration=128.609459519 podStartE2EDuration="2m8.609459519s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.598090458 +0000 UTC m=+200.586330833" watchObservedRunningTime="2026-03-14 09:00:15.609459519 +0000 UTC m=+200.597699904" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.676243 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=53.676224932 podStartE2EDuration="53.676224932s" podCreationTimestamp="2026-03-14 08:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.674390026 +0000 UTC m=+200.662630401" watchObservedRunningTime="2026-03-14 09:00:15.676224932 +0000 UTC m=+200.664465307" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.686504 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=76.686487606 podStartE2EDuration="1m16.686487606s" podCreationTimestamp="2026-03-14 08:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 09:00:15.686385724 +0000 UTC m=+200.674626119" watchObservedRunningTime="2026-03-14 09:00:15.686487606 +0000 UTC m=+200.674727981" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78fce235-e413-4c75-8f20-43b5628000a2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695325 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fce235-e413-4c75-8f20-43b5628000a2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695377 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78fce235-e413-4c75-8f20-43b5628000a2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695602 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.695746 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/78fce235-e413-4c75-8f20-43b5628000a2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.696213 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/78fce235-e413-4c75-8f20-43b5628000a2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.700362 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=50.700347199 podStartE2EDuration="50.700347199s" podCreationTimestamp="2026-03-14 08:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.700276668 +0000 UTC m=+200.688517063" watchObservedRunningTime="2026-03-14 09:00:15.700347199 +0000 UTC m=+200.688587574" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.700979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fce235-e413-4c75-8f20-43b5628000a2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.711979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78fce235-e413-4c75-8f20-43b5628000a2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjtbr\" (UID: \"78fce235-e413-4c75-8f20-43b5628000a2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.738172 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:15 crc kubenswrapper[4687]: E0314 09:00:15.738271 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.738463 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:15 crc kubenswrapper[4687]: E0314 09:00:15.738509 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.738467 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:15 crc kubenswrapper[4687]: E0314 09:00:15.738589 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.743572 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xjjs4" podStartSLOduration=128.743558479 podStartE2EDuration="2m8.743558479s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:15.742497933 +0000 UTC m=+200.730738308" watchObservedRunningTime="2026-03-14 09:00:15.743558479 +0000 UTC m=+200.731798854" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.766058 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.787148 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 14 09:00:15 crc kubenswrapper[4687]: I0314 09:00:15.795446 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 09:00:15 crc kubenswrapper[4687]: E0314 09:00:15.994896 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:16 crc kubenswrapper[4687]: I0314 09:00:16.500863 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" event={"ID":"78fce235-e413-4c75-8f20-43b5628000a2","Type":"ContainerStarted","Data":"c72bdae3a0563728b8703cb6a095f4f95ba057a36f9fb3a00223d3dc0fc5808f"} Mar 14 09:00:16 crc kubenswrapper[4687]: I0314 09:00:16.500948 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" event={"ID":"78fce235-e413-4c75-8f20-43b5628000a2","Type":"ContainerStarted","Data":"b3eaec60abb9f2d3cc3251fdde45fa09555852abc0b2f4966ae24306e4a6ec92"} Mar 14 09:00:16 crc kubenswrapper[4687]: I0314 09:00:16.517998 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjtbr" podStartSLOduration=129.517979184 podStartE2EDuration="2m9.517979184s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:16.51663808 +0000 UTC m=+201.504878495" watchObservedRunningTime="2026-03-14 
09:00:16.517979184 +0000 UTC m=+201.506219569" Mar 14 09:00:16 crc kubenswrapper[4687]: I0314 09:00:16.736183 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:16 crc kubenswrapper[4687]: E0314 09:00:16.736732 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:16 crc kubenswrapper[4687]: I0314 09:00:16.736921 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:00:16 crc kubenswrapper[4687]: E0314 09:00:16.737049 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jkcr7_openshift-ovn-kubernetes(f7a910c6-8772-4fc8-b557-8ca75235f11c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" Mar 14 09:00:17 crc kubenswrapper[4687]: I0314 09:00:17.736176 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:17 crc kubenswrapper[4687]: I0314 09:00:17.736227 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:17 crc kubenswrapper[4687]: I0314 09:00:17.736196 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:17 crc kubenswrapper[4687]: E0314 09:00:17.736323 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:17 crc kubenswrapper[4687]: E0314 09:00:17.736429 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:17 crc kubenswrapper[4687]: E0314 09:00:17.736508 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:18 crc kubenswrapper[4687]: I0314 09:00:18.735998 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:18 crc kubenswrapper[4687]: E0314 09:00:18.736474 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:18 crc kubenswrapper[4687]: I0314 09:00:18.736579 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.509605 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/4.log" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.510729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39509f276273ece1deeff2f96ae4e69e1047642c297c704a9b637602df040bc1"} Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.511014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.529525 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.529486958 podStartE2EDuration="1m15.529486958s" podCreationTimestamp="2026-03-14 08:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:19.528516303 +0000 UTC m=+204.516756678" watchObservedRunningTime="2026-03-14 
09:00:19.529486958 +0000 UTC m=+204.517727353" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.736756 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.736782 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:19 crc kubenswrapper[4687]: I0314 09:00:19.736796 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:19 crc kubenswrapper[4687]: E0314 09:00:19.736896 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:19 crc kubenswrapper[4687]: E0314 09:00:19.736990 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:19 crc kubenswrapper[4687]: E0314 09:00:19.737087 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:20 crc kubenswrapper[4687]: I0314 09:00:20.736744 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:20 crc kubenswrapper[4687]: E0314 09:00:20.736903 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:20 crc kubenswrapper[4687]: E0314 09:00:20.996728 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:21 crc kubenswrapper[4687]: I0314 09:00:21.736587 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:21 crc kubenswrapper[4687]: I0314 09:00:21.736625 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:21 crc kubenswrapper[4687]: I0314 09:00:21.736600 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:21 crc kubenswrapper[4687]: E0314 09:00:21.736776 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:21 crc kubenswrapper[4687]: E0314 09:00:21.736834 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:21 crc kubenswrapper[4687]: E0314 09:00:21.736911 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:22 crc kubenswrapper[4687]: I0314 09:00:22.736873 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:22 crc kubenswrapper[4687]: E0314 09:00:22.737043 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:23 crc kubenswrapper[4687]: I0314 09:00:23.736214 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:23 crc kubenswrapper[4687]: I0314 09:00:23.736303 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:23 crc kubenswrapper[4687]: E0314 09:00:23.736962 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:23 crc kubenswrapper[4687]: E0314 09:00:23.737055 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:23 crc kubenswrapper[4687]: I0314 09:00:23.736432 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:23 crc kubenswrapper[4687]: E0314 09:00:23.737262 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:24 crc kubenswrapper[4687]: I0314 09:00:24.736121 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:24 crc kubenswrapper[4687]: E0314 09:00:24.736441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:25 crc kubenswrapper[4687]: I0314 09:00:25.736567 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:25 crc kubenswrapper[4687]: I0314 09:00:25.736567 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:25 crc kubenswrapper[4687]: E0314 09:00:25.738573 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:25 crc kubenswrapper[4687]: I0314 09:00:25.738650 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:25 crc kubenswrapper[4687]: E0314 09:00:25.738876 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:25 crc kubenswrapper[4687]: E0314 09:00:25.739078 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:25 crc kubenswrapper[4687]: E0314 09:00:25.997953 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:26 crc kubenswrapper[4687]: I0314 09:00:26.736713 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:26 crc kubenswrapper[4687]: E0314 09:00:26.737015 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.538369 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/1.log" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.539251 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/0.log" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.539387 4687 generic.go:334] "Generic (PLEG): container finished" podID="732cd580-e685-4b88-b227-b113c4be4c55" containerID="f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e" exitCode=1 Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.539446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" 
event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerDied","Data":"f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e"} Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.539505 4687 scope.go:117] "RemoveContainer" containerID="584c72951217611fb9b351c3c4a2a82045f572eabc33b614dfce5bbb855f33ff" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.540074 4687 scope.go:117] "RemoveContainer" containerID="f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e" Mar 14 09:00:27 crc kubenswrapper[4687]: E0314 09:00:27.540459 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xjjs4_openshift-multus(732cd580-e685-4b88-b227-b113c4be4c55)\"" pod="openshift-multus/multus-xjjs4" podUID="732cd580-e685-4b88-b227-b113c4be4c55" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.735752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.735816 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:27 crc kubenswrapper[4687]: I0314 09:00:27.735773 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:27 crc kubenswrapper[4687]: E0314 09:00:27.735890 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:27 crc kubenswrapper[4687]: E0314 09:00:27.735970 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:27 crc kubenswrapper[4687]: E0314 09:00:27.736047 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:28 crc kubenswrapper[4687]: I0314 09:00:28.543926 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/1.log" Mar 14 09:00:28 crc kubenswrapper[4687]: I0314 09:00:28.736093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:28 crc kubenswrapper[4687]: E0314 09:00:28.736222 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:29 crc kubenswrapper[4687]: I0314 09:00:29.736653 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:29 crc kubenswrapper[4687]: E0314 09:00:29.736781 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:29 crc kubenswrapper[4687]: I0314 09:00:29.736832 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:29 crc kubenswrapper[4687]: E0314 09:00:29.736892 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:29 crc kubenswrapper[4687]: I0314 09:00:29.736653 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:29 crc kubenswrapper[4687]: E0314 09:00:29.736945 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:30 crc kubenswrapper[4687]: I0314 09:00:30.735872 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:30 crc kubenswrapper[4687]: E0314 09:00:30.736039 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:30 crc kubenswrapper[4687]: I0314 09:00:30.737256 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:00:30 crc kubenswrapper[4687]: E0314 09:00:30.999524 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.128381 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.471606 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xptn"] Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.554617 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/3.log" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.557241 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerStarted","Data":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.557294 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:31 crc kubenswrapper[4687]: E0314 09:00:31.558060 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.598615 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podStartSLOduration=144.598599771 podStartE2EDuration="2m24.598599771s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:31.597890593 +0000 UTC m=+216.586130968" watchObservedRunningTime="2026-03-14 09:00:31.598599771 +0000 UTC m=+216.586840146" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.736538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:31 crc kubenswrapper[4687]: E0314 09:00:31.736746 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.737050 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:31 crc kubenswrapper[4687]: E0314 09:00:31.737128 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:31 crc kubenswrapper[4687]: I0314 09:00:31.737388 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:31 crc kubenswrapper[4687]: E0314 09:00:31.737479 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:33 crc kubenswrapper[4687]: I0314 09:00:33.736520 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:33 crc kubenswrapper[4687]: I0314 09:00:33.736591 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:33 crc kubenswrapper[4687]: I0314 09:00:33.736595 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:33 crc kubenswrapper[4687]: E0314 09:00:33.736644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:33 crc kubenswrapper[4687]: E0314 09:00:33.736739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:33 crc kubenswrapper[4687]: I0314 09:00:33.736768 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:33 crc kubenswrapper[4687]: E0314 09:00:33.736817 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:33 crc kubenswrapper[4687]: E0314 09:00:33.736904 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:35 crc kubenswrapper[4687]: I0314 09:00:35.736143 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:35 crc kubenswrapper[4687]: I0314 09:00:35.736189 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:35 crc kubenswrapper[4687]: I0314 09:00:35.737310 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:35 crc kubenswrapper[4687]: E0314 09:00:35.737309 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:35 crc kubenswrapper[4687]: E0314 09:00:35.737455 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:35 crc kubenswrapper[4687]: I0314 09:00:35.737498 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:35 crc kubenswrapper[4687]: E0314 09:00:35.737561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:35 crc kubenswrapper[4687]: E0314 09:00:35.737681 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:36 crc kubenswrapper[4687]: E0314 09:00:36.000028 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:37 crc kubenswrapper[4687]: I0314 09:00:37.736200 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:37 crc kubenswrapper[4687]: I0314 09:00:37.736216 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:37 crc kubenswrapper[4687]: I0314 09:00:37.736265 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:37 crc kubenswrapper[4687]: I0314 09:00:37.736284 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:37 crc kubenswrapper[4687]: E0314 09:00:37.736491 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:37 crc kubenswrapper[4687]: E0314 09:00:37.736543 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:37 crc kubenswrapper[4687]: E0314 09:00:37.736591 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:37 crc kubenswrapper[4687]: E0314 09:00:37.736437 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:39 crc kubenswrapper[4687]: I0314 09:00:39.736129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:39 crc kubenswrapper[4687]: E0314 09:00:39.736251 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:39 crc kubenswrapper[4687]: I0314 09:00:39.736307 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:39 crc kubenswrapper[4687]: E0314 09:00:39.736371 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:39 crc kubenswrapper[4687]: I0314 09:00:39.736594 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:39 crc kubenswrapper[4687]: I0314 09:00:39.736626 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:39 crc kubenswrapper[4687]: E0314 09:00:39.736651 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:39 crc kubenswrapper[4687]: E0314 09:00:39.736751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:41 crc kubenswrapper[4687]: E0314 09:00:41.002096 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:41 crc kubenswrapper[4687]: I0314 09:00:41.736458 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:41 crc kubenswrapper[4687]: I0314 09:00:41.736743 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:41 crc kubenswrapper[4687]: E0314 09:00:41.736843 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:41 crc kubenswrapper[4687]: I0314 09:00:41.736638 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:41 crc kubenswrapper[4687]: E0314 09:00:41.736991 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:41 crc kubenswrapper[4687]: E0314 09:00:41.737137 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:41 crc kubenswrapper[4687]: I0314 09:00:41.736670 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:41 crc kubenswrapper[4687]: E0314 09:00:41.737380 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:42 crc kubenswrapper[4687]: I0314 09:00:42.736714 4687 scope.go:117] "RemoveContainer" containerID="f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e" Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.592879 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/1.log" Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.592968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerStarted","Data":"edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0"} Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.735912 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.736007 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:43 crc kubenswrapper[4687]: E0314 09:00:43.736115 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.736229 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:43 crc kubenswrapper[4687]: I0314 09:00:43.736286 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:43 crc kubenswrapper[4687]: E0314 09:00:43.736383 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:43 crc kubenswrapper[4687]: E0314 09:00:43.736524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:43 crc kubenswrapper[4687]: E0314 09:00:43.736727 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:45 crc kubenswrapper[4687]: I0314 09:00:45.735960 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:45 crc kubenswrapper[4687]: I0314 09:00:45.735960 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:45 crc kubenswrapper[4687]: I0314 09:00:45.736055 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:45 crc kubenswrapper[4687]: I0314 09:00:45.736056 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:45 crc kubenswrapper[4687]: E0314 09:00:45.737268 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xptn" podUID="4aae76c5-5354-43fd-8771-0114216bbf40" Mar 14 09:00:45 crc kubenswrapper[4687]: E0314 09:00:45.737456 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:45 crc kubenswrapper[4687]: E0314 09:00:45.737550 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:45 crc kubenswrapper[4687]: E0314 09:00:45.737501 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.105411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.155214 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hnql"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.155775 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.156828 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.157578 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.158793 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.159008 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.159249 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.160062 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.160270 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.160532 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.160562 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.160842 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lkkvm"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.161118 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.161437 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.162171 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.162753 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.163424 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.163767 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.171027 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.171662 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.181837 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.182157 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.184658 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.185239 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.186066 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.186248 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.187023 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188282 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02925596-31b2-4a1c-a387-4244cbf714dc-serving-cert\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188536 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-dir\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.188635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be1042b-d88e-402e-9c1d-2b258aa67d9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-trusted-ca\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188781 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188851 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-policies\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpv9h\" (UniqueName: \"kubernetes.io/projected/d5694044-0b34-45e7-ab8d-a140eaf37b70-kube-api-access-vpv9h\") pod 
\"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.188989 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-images\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189125 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189256 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrm9\" (UniqueName: \"kubernetes.io/projected/9104920c-a1ce-4042-aa76-35aca642996c-kube-api-access-vdrm9\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be1042b-d88e-402e-9c1d-2b258aa67d9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a6bc76d-4c58-46bc-82f2-c25f27871f31-machine-approver-tls\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnbf\" (UniqueName: 
\"kubernetes.io/projected/6be1042b-d88e-402e-9c1d-2b258aa67d9f-kube-api-access-mfnbf\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189637 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-config\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkcv\" (UniqueName: \"kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm4t\" (UniqueName: \"kubernetes.io/projected/1a6bc76d-4c58-46bc-82f2-c25f27871f31-kube-api-access-2jm4t\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-serving-cert\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.189989 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-encryption-config\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190122 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-config\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8lc\" (UniqueName: \"kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190361 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190441 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwls\" (UniqueName: \"kubernetes.io/projected/02925596-31b2-4a1c-a387-4244cbf714dc-kube-api-access-dqwls\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190564 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5694044-0b34-45e7-ab8d-a140eaf37b70-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") 
" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-auth-proxy-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k7p\" (UniqueName: \"kubernetes.io/projected/d005bc2d-58d9-4ca1-8fda-935a6569e953-kube-api-access-98k7p\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190780 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-client\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.190846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9104920c-a1ce-4042-aa76-35aca642996c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.191396 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.191682 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.191936 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.192128 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.192349 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.192865 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.193161 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-khk5g"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.193709 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.193961 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.194178 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.194472 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.194724 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.194905 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.195189 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.195420 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.206476 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.210468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.210484 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.211026 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211307 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211348 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211531 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211552 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211700 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.211832 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.212021 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.212228 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.212480 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.212943 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.219938 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.221760 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.223547 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224293 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224349 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224492 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224681 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.224861 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225592 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225697 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225811 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225841 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.225951 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.226058 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228381 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228470 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228707 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228832 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228938 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.228433 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.229077 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.229212 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.230642 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.230812 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.230948 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.235860 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.236184 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.236372 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7578x"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.236759 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.236896 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.239177 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.240616 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.240735 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.242353 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9q7lp"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.243024 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.243244 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.243546 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zzmgp"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.243919 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.245556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-brwhw"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.245986 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.254256 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bbmr4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.254790 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.255105 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.255912 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.256317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.256770 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.257256 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.258063 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.258463 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v4phn"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.258663 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.259301 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.259507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.259727 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.259821 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.260262 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.260310 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.271038 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.272168 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.273059 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.275527 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.276612 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.282363 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.286533 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.286580 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.286534 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.286840 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.286875 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.306597 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.306633 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.306769 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.307252 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 14 09:00:46 crc kubenswrapper[4687]: 
I0314 09:00:46.307760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.307870 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308010 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308278 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308440 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308622 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308715 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308818 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.308923 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.309010 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.309117 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.309385 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.309598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.312043 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a6bc76d-4c58-46bc-82f2-c25f27871f31-machine-approver-tls\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.312428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnbf\" (UniqueName: \"kubernetes.io/projected/6be1042b-d88e-402e-9c1d-2b258aa67d9f-kube-api-access-mfnbf\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.312735 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.313870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.313912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-config\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.313987 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkcv\" (UniqueName: \"kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314539 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-config\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jm4t\" (UniqueName: \"kubernetes.io/projected/1a6bc76d-4c58-46bc-82f2-c25f27871f31-kube-api-access-2jm4t\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314589 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-serving-cert\") pod 
\"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314712 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314902 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.314992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-encryption-config\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-config\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: 
I0314 09:00:46.315277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8lc\" (UniqueName: \"kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315538 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqwls\" (UniqueName: \"kubernetes.io/projected/02925596-31b2-4a1c-a387-4244cbf714dc-kube-api-access-dqwls\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315622 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5694044-0b34-45e7-ab8d-a140eaf37b70-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315814 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-auth-proxy-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315920 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k7p\" (UniqueName: \"kubernetes.io/projected/d005bc2d-58d9-4ca1-8fda-935a6569e953-kube-api-access-98k7p\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-client\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316080 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9104920c-a1ce-4042-aa76-35aca642996c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02925596-31b2-4a1c-a387-4244cbf714dc-serving-cert\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " 
pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316417 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-dir\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317038 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316423 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-dir\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be1042b-d88e-402e-9c1d-2b258aa67d9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-trusted-ca\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-policies\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317454 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpv9h\" (UniqueName: \"kubernetes.io/projected/d5694044-0b34-45e7-ab8d-a140eaf37b70-kube-api-access-vpv9h\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317478 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-images\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317501 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317552 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrm9\" (UniqueName: \"kubernetes.io/projected/9104920c-a1ce-4042-aa76-35aca642996c-kube-api-access-vdrm9\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be1042b-d88e-402e-9c1d-2b258aa67d9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317901 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.318183 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-auth-proxy-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.318253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be1042b-d88e-402e-9c1d-2b258aa67d9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.317046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-config\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.315837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.321182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5694044-0b34-45e7-ab8d-a140eaf37b70-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.321247 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-serving-cert\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.316173 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.322645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5694044-0b34-45e7-ab8d-a140eaf37b70-images\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.322923 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bc76d-4c58-46bc-82f2-c25f27871f31-config\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.323549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-audit-policies\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.324255 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.324588 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.324729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.324768 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.325388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-etcd-client\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.325551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: 
\"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.326890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9104920c-a1ce-4042-aa76-35aca642996c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.327269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d005bc2d-58d9-4ca1-8fda-935a6569e953-encryption-config\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.327944 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.328628 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be1042b-d88e-402e-9c1d-2b258aa67d9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.331054 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.331832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02925596-31b2-4a1c-a387-4244cbf714dc-trusted-ca\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.332887 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a6bc76d-4c58-46bc-82f2-c25f27871f31-machine-approver-tls\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.335318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02925596-31b2-4a1c-a387-4244cbf714dc-serving-cert\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.335414 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.336120 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.336700 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.337208 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.337815 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.337971 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.338150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.338426 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.339619 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-t7bl7"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.340086 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.341017 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.341676 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.343317 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.343989 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.344001 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.345595 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.347856 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.348412 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.348802 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.352411 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.354319 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.354850 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.356736 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.359207 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.359822 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hw27r"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.360226 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.360621 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.360927 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.360936 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.361509 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.365512 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557980-mm8hd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.365622 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.369278 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.369569 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.370801 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-72dmh"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.371427 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.382730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-khk5g"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.382786 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qmmnh"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.382959 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.384266 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.384294 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lkkvm"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.384311 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v4phn"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.385237 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.386616 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.387163 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.387600 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.389060 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.390388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9q7lp"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.391148 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.392178 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.394176 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.395231 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.396331 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.397756 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.397957 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.399350 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.400946 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.402445 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.403648 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zzmgp"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.404766 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.405879 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7578x"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.407132 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zkkk7"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.407858 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.408369 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.409568 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.410739 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bbmr4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.411823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.413156 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.414740 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-brwhw"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.415892 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"] Mar 
14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.417090 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pdncg"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.417743 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418135 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-service-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418162 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit-dir\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418321 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a5910cc-d61b-4384-a9a1-49104c3f337f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: \"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418742 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-service-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-serving-cert\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418900 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-encryption-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.418981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjm4\" (UniqueName: \"kubernetes.io/projected/a0893729-52eb-4339-83bb-e7ec8ba388b7-kube-api-access-5sjm4\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-client\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419271 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53609d3c-0f7c-4413-898c-8d4e6f47db22-metrics-tls\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419393 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzr2\" (UniqueName: \"kubernetes.io/projected/6b74e63b-7771-4f32-9fca-0e112597d97e-kube-api-access-5fzr2\") pod \"downloads-7954f5f757-7578x\" (UID: \"6b74e63b-7771-4f32-9fca-0e112597d97e\") " pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8sg\" (UniqueName: \"kubernetes.io/projected/b37ac934-0137-4459-87aa-ade97e608134-kube-api-access-xs8sg\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-node-pullsecrets\") pod \"apiserver-76f77b778f-bbmr4\" (UID: 
\"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-image-import-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.419959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mptk\" (UniqueName: \"kubernetes.io/projected/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-kube-api-access-2mptk\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420163 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b37ac934-0137-4459-87aa-ade97e608134-serving-cert\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420291 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-client\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420387 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420471 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh72q\" (UniqueName: \"kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420542 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-config\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420627 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-config\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420720 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlknm\" (UniqueName: \"kubernetes.io/projected/53609d3c-0f7c-4413-898c-8d4e6f47db22-kube-api-access-wlknm\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420826 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.420963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v458v\" (UniqueName: \"kubernetes.io/projected/3a5910cc-d61b-4384-a9a1-49104c3f337f-kube-api-access-v458v\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: \"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.421108 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bbmr4\" (UID: 
\"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.421213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-serving-cert\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.421351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.421923 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.423254 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qmmnh"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.423428 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.423677 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72dmh"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.422061 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.424248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.426164 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.427361 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hw27r"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.428628 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-mm8hd"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.429692 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.430681 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hnql"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.431722 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.432717 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pdncg"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.433900 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"] Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.438610 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.457960 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.477509 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.498120 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.517315 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523189 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-node-pullsecrets\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-image-import-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.523300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mptk\" (UniqueName: \"kubernetes.io/projected/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-kube-api-access-2mptk\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523325 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523370 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37ac934-0137-4459-87aa-ade97e608134-serving-cert\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523501 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-client\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 
09:00:46.523544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh72q\" (UniqueName: \"kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-config\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523650 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-config\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523683 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlknm\" (UniqueName: \"kubernetes.io/projected/53609d3c-0f7c-4413-898c-8d4e6f47db22-kube-api-access-wlknm\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.523712 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.523772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v458v\" (UniqueName: \"kubernetes.io/projected/3a5910cc-d61b-4384-a9a1-49104c3f337f-kube-api-access-v458v\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: \"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.524441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-node-pullsecrets\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525021 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-image-import-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525087 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-config\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-serving-cert\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " 
pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-config\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.525848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526258 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526587 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit-dir\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.526673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-service-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526793 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a5910cc-d61b-4384-a9a1-49104c3f337f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: \"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.526920 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit-dir\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527208 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37ac934-0137-4459-87aa-ade97e608134-serving-cert\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.526888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-service-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527343 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-encryption-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-serving-cert\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527373 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-service-ca\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527394 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjm4\" (UniqueName: \"kubernetes.io/projected/a0893729-52eb-4339-83bb-e7ec8ba388b7-kube-api-access-5sjm4\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-client\") pod \"etcd-operator-b45778765-brwhw\" (UID: 
\"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527475 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53609d3c-0f7c-4413-898c-8d4e6f47db22-metrics-tls\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527546 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fzr2\" (UniqueName: \"kubernetes.io/projected/6b74e63b-7771-4f32-9fca-0e112597d97e-kube-api-access-5fzr2\") pod \"downloads-7954f5f757-7578x\" (UID: \"6b74e63b-7771-4f32-9fca-0e112597d97e\") " pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8sg\" (UniqueName: \"kubernetes.io/projected/b37ac934-0137-4459-87aa-ade97e608134-kube-api-access-xs8sg\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.527813 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle\") pod 
\"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.528143 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.528518 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-audit\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.530955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531185 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-serving-cert\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config\") pod \"console-f9d7485db-4bm6l\" (UID: 
\"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b37ac934-0137-4459-87aa-ade97e608134-service-ca-bundle\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531304 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-etcd-client\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531449 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-encryption-config\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-etcd-client\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.531982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3a5910cc-d61b-4384-a9a1-49104c3f337f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: 
\"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.532023 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0893729-52eb-4339-83bb-e7ec8ba388b7-serving-cert\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.532457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.534138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53609d3c-0f7c-4413-898c-8d4e6f47db22-metrics-tls\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.537856 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.557957 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.578483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.598253 4687 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.618123 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.638136 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.658017 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.678624 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.698699 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.731904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnbf\" (UniqueName: \"kubernetes.io/projected/6be1042b-d88e-402e-9c1d-2b258aa67d9f-kube-api-access-mfnbf\") pod \"openshift-apiserver-operator-796bbdcf4f-24tmc\" (UID: \"6be1042b-d88e-402e-9c1d-2b258aa67d9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.777668 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jm4t\" (UniqueName: \"kubernetes.io/projected/1a6bc76d-4c58-46bc-82f2-c25f27871f31-kube-api-access-2jm4t\") pod \"machine-approver-56656f9798-kp5dx\" (UID: \"1a6bc76d-4c58-46bc-82f2-c25f27871f31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: 
I0314 09:00:46.791267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkcv\" (UniqueName: \"kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv\") pod \"controller-manager-879f6c89f-msfwg\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.809325 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.813059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8lc\" (UniqueName: \"kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc\") pod \"route-controller-manager-6576b87f9c-bnmqd\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: W0314 09:00:46.826364 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a6bc76d_4c58_46bc_82f2_c25f27871f31.slice/crio-c30b622c91746cba939a0daa8348178b656f0907a6a413e9b8f6c67a358992b7 WatchSource:0}: Error finding container c30b622c91746cba939a0daa8348178b656f0907a6a413e9b8f6c67a358992b7: Status 404 returned error can't find the container with id c30b622c91746cba939a0daa8348178b656f0907a6a413e9b8f6c67a358992b7 Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.839287 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.879960 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.881046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k7p\" (UniqueName: \"kubernetes.io/projected/d005bc2d-58d9-4ca1-8fda-935a6569e953-kube-api-access-98k7p\") pod \"apiserver-7bbb656c7d-m97vv\" (UID: \"d005bc2d-58d9-4ca1-8fda-935a6569e953\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.891494 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.895443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqwls\" (UniqueName: \"kubernetes.io/projected/02925596-31b2-4a1c-a387-4244cbf714dc-kube-api-access-dqwls\") pod \"console-operator-58897d9998-lkkvm\" (UID: \"02925596-31b2-4a1c-a387-4244cbf714dc\") " pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.898839 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.919012 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.939646 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.958467 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 14 09:00:46 crc 
kubenswrapper[4687]: I0314 09:00:46.967934 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.979526 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.995611 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:46 crc kubenswrapper[4687]: I0314 09:00:46.998246 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.035527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpv9h\" (UniqueName: \"kubernetes.io/projected/d5694044-0b34-45e7-ab8d-a140eaf37b70-kube-api-access-vpv9h\") pod \"machine-api-operator-5694c8668f-7hnql\" (UID: \"d5694044-0b34-45e7-ab8d-a140eaf37b70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.055109 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.058867 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.064128 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrm9\" (UniqueName: \"kubernetes.io/projected/9104920c-a1ce-4042-aa76-35aca642996c-kube-api-access-vdrm9\") pod \"cluster-samples-operator-665b6dd947-hkv5c\" (UID: \"9104920c-a1ce-4042-aa76-35aca642996c\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.079790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.087996 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.098313 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.102977 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.118784 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.118905 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.138370 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.147260 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.158749 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.165892 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:47 crc kubenswrapper[4687]: W0314 09:00:47.178001 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc5b3351_9222_4a86_a305_b11ac78717d5.slice/crio-8f6803d432bf3ef534820729daf26008c055823389ba94a7084383cd472bbeaa WatchSource:0}: Error finding container 8f6803d432bf3ef534820729daf26008c055823389ba94a7084383cd472bbeaa: Status 404 returned error can't find the container with id 8f6803d432bf3ef534820729daf26008c055823389ba94a7084383cd472bbeaa Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.179427 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.183426 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.198907 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.220241 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.238134 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.258489 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.278806 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.278968 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hnql"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.297982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.318810 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.339183 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.356466 4687 request.go:700] Waited for 1.015903843s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.362139 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.362795 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.380071 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.393166 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lkkvm"] Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.398752 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 09:00:47 crc kubenswrapper[4687]: W0314 09:00:47.417429 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02925596_31b2_4a1c_a387_4244cbf714dc.slice/crio-44c40288822f5827055e979e917c9c6d08e6c1969402f2f6906e632aa9c57bdb WatchSource:0}: Error finding container 44c40288822f5827055e979e917c9c6d08e6c1969402f2f6906e632aa9c57bdb: Status 404 returned error can't find the container with id 44c40288822f5827055e979e917c9c6d08e6c1969402f2f6906e632aa9c57bdb Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.417841 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.448764 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.458871 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.477237 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.498290 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.518507 4687 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.537805 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.558490 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.578231 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.599000 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.604794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" event={"ID":"1a6bc76d-4c58-46bc-82f2-c25f27871f31","Type":"ContainerStarted","Data":"b471785414f1472029f62a186c62ae32617efcc644b65d04de36aef2fa035ebe"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.604837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" event={"ID":"1a6bc76d-4c58-46bc-82f2-c25f27871f31","Type":"ContainerStarted","Data":"5b35f7cf3ba54e418eb00263ab79125204b9e1eeff43b27381f6d20587d3e1c8"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.604850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" event={"ID":"1a6bc76d-4c58-46bc-82f2-c25f27871f31","Type":"ContainerStarted","Data":"c30b622c91746cba939a0daa8348178b656f0907a6a413e9b8f6c67a358992b7"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.607237 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" event={"ID":"bc5b3351-9222-4a86-a305-b11ac78717d5","Type":"ContainerStarted","Data":"5cd1c1b1695b8a7495381cac44ad3d9af033c41c7d932b6026cce1721d018f42"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.607278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" event={"ID":"bc5b3351-9222-4a86-a305-b11ac78717d5","Type":"ContainerStarted","Data":"8f6803d432bf3ef534820729daf26008c055823389ba94a7084383cd472bbeaa"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.607484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.608872 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" event={"ID":"9104920c-a1ce-4042-aa76-35aca642996c","Type":"ContainerStarted","Data":"d6eb88c23327b29ecce6e537bf89b39eeda9f5d03fe042161ba82d2d115c00a6"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.609178 4687 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bnmqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.609215 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" 
Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.610405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" event={"ID":"6be1042b-d88e-402e-9c1d-2b258aa67d9f","Type":"ContainerStarted","Data":"fad532b12477788dcd3656c822317204d061ebdb8147e27b29d9d8e43955847d"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.610447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" event={"ID":"6be1042b-d88e-402e-9c1d-2b258aa67d9f","Type":"ContainerStarted","Data":"8728481ebed9e14232b5cb0f51f24a2b242e47f5384f124f8dfb9fc2c2c56c9e"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.614480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" event={"ID":"02925596-31b2-4a1c-a387-4244cbf714dc","Type":"ContainerStarted","Data":"c20ef9f2cc40a5b015bca454c74edc32145d4806dffaf02b6793b732d0255734"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.614541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" event={"ID":"02925596-31b2-4a1c-a387-4244cbf714dc","Type":"ContainerStarted","Data":"44c40288822f5827055e979e917c9c6d08e6c1969402f2f6906e632aa9c57bdb"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.614696 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.615633 4687 generic.go:334] "Generic (PLEG): container finished" podID="d005bc2d-58d9-4ca1-8fda-935a6569e953" containerID="a73504322bad5c95800b81d29737827233cb4855564de6068bc1ff6c2984b990" exitCode=0 Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.615740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" event={"ID":"d005bc2d-58d9-4ca1-8fda-935a6569e953","Type":"ContainerDied","Data":"a73504322bad5c95800b81d29737827233cb4855564de6068bc1ff6c2984b990"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.615764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" event={"ID":"d005bc2d-58d9-4ca1-8fda-935a6569e953","Type":"ContainerStarted","Data":"aca06b302a0021d6f687df8637bec3836e8dbf3c917232daadfd279cf7e8624a"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.615852 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-lkkvm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.615880 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" podUID="02925596-31b2-4a1c-a387-4244cbf714dc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.619476 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" event={"ID":"d5694044-0b34-45e7-ab8d-a140eaf37b70","Type":"ContainerStarted","Data":"4799ad51c28d6010ca7b032f0a25389274a7f46774f6a1235a96019ac0fafbd7"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.619510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" event={"ID":"d5694044-0b34-45e7-ab8d-a140eaf37b70","Type":"ContainerStarted","Data":"aadbc820db1de965c39e5b80feec12944d831a5684b14e61a8daa6e3065d0bd0"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 
09:00:47.621415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" event={"ID":"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5","Type":"ContainerStarted","Data":"94dc3a5657c46609c7c87df713814094c2c0a591cb869b3465392812bc3972c2"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.621475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" event={"ID":"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5","Type":"ContainerStarted","Data":"bb9b1db690576a0c93617ecc96d2094eb9c28ba1901a07f90cc9727d97c02e38"} Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.621492 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.624290 4687 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-msfwg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.624359 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.625040 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.638077 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 
09:00:47.657757 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.677866 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.698730 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.718668 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.735816 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.735862 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.735832 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.736063 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.739216 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.758593 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.777904 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.800253 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.818506 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.837975 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.858741 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.878517 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.898265 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.918532 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.938832 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.958253 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.977810 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 09:00:47 crc kubenswrapper[4687]: I0314 09:00:47.999076 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.017981 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.038474 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.058142 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.078801 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.097984 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.119009 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.138004 4687 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.158189 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.178598 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.198694 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.218305 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.238128 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.258578 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.277542 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.297503 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.318801 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.338193 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 
09:00:48.356454 4687 ???:1] "http: TLS handshake error from 192.168.126.11:41428: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.356605 4687 request.go:700] Waited for 1.948499703s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.357905 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.381780 4687 ???:1] "http: TLS handshake error from 192.168.126.11:41440: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.383522 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.399053 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.418812 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.459764 4687 ???:1] "http: TLS handshake error from 192.168.126.11:41446: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.463575 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh72q\" (UniqueName: \"kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q\") pod \"console-f9d7485db-4bm6l\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " 
pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.479516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v458v\" (UniqueName: \"kubernetes.io/projected/3a5910cc-d61b-4384-a9a1-49104c3f337f-kube-api-access-v458v\") pod \"multus-admission-controller-857f4d67dd-v4phn\" (UID: \"3a5910cc-d61b-4384-a9a1-49104c3f337f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.492170 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlknm\" (UniqueName: \"kubernetes.io/projected/53609d3c-0f7c-4413-898c-8d4e6f47db22-kube-api-access-wlknm\") pod \"dns-operator-744455d44c-9q7lp\" (UID: \"53609d3c-0f7c-4413-898c-8d4e6f47db22\") " pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.510279 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.525616 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.531176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mptk\" (UniqueName: \"kubernetes.io/projected/cfb2873a-d7cb-4b1b-8f54-23e4380142a1-kube-api-access-2mptk\") pod \"apiserver-76f77b778f-bbmr4\" (UID: \"cfb2873a-d7cb-4b1b-8f54-23e4380142a1\") " pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.537808 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.540384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8sg\" (UniqueName: \"kubernetes.io/projected/b37ac934-0137-4459-87aa-ade97e608134-kube-api-access-xs8sg\") pod \"authentication-operator-69f744f599-zzmgp\" (UID: \"b37ac934-0137-4459-87aa-ade97e608134\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.557876 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53744: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.558809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjm4\" (UniqueName: \"kubernetes.io/projected/a0893729-52eb-4339-83bb-e7ec8ba388b7-kube-api-access-5sjm4\") pod \"etcd-operator-b45778765-brwhw\" (UID: \"a0893729-52eb-4339-83bb-e7ec8ba388b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.561242 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.575222 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53750: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.577068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fzr2\" (UniqueName: \"kubernetes.io/projected/6b74e63b-7771-4f32-9fca-0e112597d97e-kube-api-access-5fzr2\") pod \"downloads-7954f5f757-7578x\" (UID: \"6b74e63b-7771-4f32-9fca-0e112597d97e\") " pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.619198 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.647239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.648614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" event={"ID":"9104920c-a1ce-4042-aa76-35aca642996c","Type":"ContainerStarted","Data":"5206e41f6775caca4d856d32867ec43b6ac4658ffe80f265bc945558b9873517"} Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.648652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" event={"ID":"9104920c-a1ce-4042-aa76-35aca642996c","Type":"ContainerStarted","Data":"65e45f7f5c3397d4b30e1edcafd9afae5bee9274f3be9935f2015ce3a5a4b784"} Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651353 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651406 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 
14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwqd\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-kube-api-access-hzwqd\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9965g\" (UniqueName: \"kubernetes.io/projected/1db4faa2-dff4-461a-8a84-1ed84fdfb60e-kube-api-access-9965g\") pod \"migrator-59844c95c7-vznm6\" (UID: \"1db4faa2-dff4-461a-8a84-1ed84fdfb60e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651533 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tlp\" (UniqueName: \"kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651581 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651597 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6dea719-0142-44d7-9687-2ddd66192bfc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: 
\"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651647 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: E0314 09:00:48.651702 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.151683712 +0000 UTC m=+234.139924087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651807 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651899 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f72ae864-b1b7-4041-b254-5b3c7004124c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.651974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652013 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72ae864-b1b7-4041-b254-5b3c7004124c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652041 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgmm\" (UniqueName: \"kubernetes.io/projected/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-kube-api-access-9hgmm\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54s2j\" (UniqueName: \"kubernetes.io/projected/b6dea719-0142-44d7-9687-2ddd66192bfc-kube-api-access-54s2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652146 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies\") pod 
\"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6dea719-0142-44d7-9687-2ddd66192bfc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg8pn\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.652292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.665816 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.665838 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" event={"ID":"d005bc2d-58d9-4ca1-8fda-935a6569e953","Type":"ContainerStarted","Data":"0b51ab931b2165676fc12598ea6153c7fad2caf03075961bf14d362c3e4c3300"} Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.681908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" event={"ID":"d5694044-0b34-45e7-ab8d-a140eaf37b70","Type":"ContainerStarted","Data":"f9c11abaa8f7b85a2a4248fd177cc3ae4f5d683bb93408484ed85f66d985ebbc"} Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.694800 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53760: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.694975 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.699489 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 
09:00:48.699712 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.715227 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.726843 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753520 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-key\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjsw\" (UniqueName: \"kubernetes.io/projected/63f0431e-f11b-4a67-8e2d-e0c95b496767-kube-api-access-nhjsw\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753825 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-node-bootstrap-token\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74c5\" (UniqueName: \"kubernetes.io/projected/df79f631-0239-4da7-b688-afdec4d4a8bd-kube-api-access-t74c5\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.753905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvscv\" (UniqueName: \"kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8841f369-d0f7-46bc-ab50-3e45edce43d6-proxy-tls\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754178 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwqd\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-kube-api-access-hzwqd\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-registration-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9965g\" (UniqueName: 
\"kubernetes.io/projected/1db4faa2-dff4-461a-8a84-1ed84fdfb60e-kube-api-access-9965g\") pod \"migrator-59844c95c7-vznm6\" (UID: \"1db4faa2-dff4-461a-8a84-1ed84fdfb60e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754390 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbqv\" (UniqueName: \"kubernetes.io/projected/704bd537-4cff-430f-9701-6a377bc4eee8-kube-api-access-tdbqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: E0314 09:00:48.754441 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.254413586 +0000 UTC m=+234.242654031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxrm\" (UniqueName: \"kubernetes.io/projected/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-kube-api-access-9jxrm\") pod \"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad13d0-49af-4f56-902f-89427f02819b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754662 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-socket-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-plugins-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754733 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtm9\" (UniqueName: \"kubernetes.io/projected/b6012153-f812-4b0e-9e91-4c503e1701d4-kube-api-access-4wtm9\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqnh\" (UniqueName: \"kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh\") pod \"auto-csr-approver-29557980-mm8hd\" (UID: \"17f93f38-eae8-494e-b879-4240e2712982\") " pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a8707de-c464-40f6-a3c9-44f298fa48e9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754822 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsjk\" (UniqueName: \"kubernetes.io/projected/eab57439-05c9-4de4-b825-8020f99d6a1a-kube-api-access-ldsjk\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754860 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a561d4a9-4310-422d-b7d8-3262db5fd689-config\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" 
(UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754937 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-csi-data-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754958 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59bnp\" (UniqueName: \"kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.754982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ad13d0-49af-4f56-902f-89427f02819b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755007 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fl4\" (UniqueName: \"kubernetes.io/projected/8841f369-d0f7-46bc-ab50-3e45edce43d6-kube-api-access-69fl4\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755046 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755070 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-proxy-tls\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755116 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755137 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55885a91-cc27-4b32-b85b-5c722d0f7e02-metrics-tls\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a561d4a9-4310-422d-b7d8-3262db5fd689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-images\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755268 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 
09:00:48.755313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755349 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-stats-auth\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755385 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-srv-cert\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755406 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55885a91-cc27-4b32-b85b-5c722d0f7e02-trusted-ca\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755476 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01336d4f-31f4-40ef-8864-f9b1234c2240-metrics-tls\") pod \"dns-default-72dmh\" (UID: 
\"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72ae864-b1b7-4041-b254-5b3c7004124c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755534 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvnhb\" (UniqueName: \"kubernetes.io/projected/52648cea-0c14-4bc7-9b0a-af61d9fee4db-kube-api-access-xvnhb\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgmm\" (UniqueName: \"kubernetes.io/projected/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-kube-api-access-9hgmm\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-54s2j\" (UniqueName: \"kubernetes.io/projected/b6dea719-0142-44d7-9687-2ddd66192bfc-kube-api-access-54s2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df79f631-0239-4da7-b688-afdec4d4a8bd-serving-cert\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755732 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnll\" (UniqueName: \"kubernetes.io/projected/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-kube-api-access-ccnll\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eab57439-05c9-4de4-b825-8020f99d6a1a-cert\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-cabundle\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755828 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755897 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-webhook-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n7b8l\" (UniqueName: \"kubernetes.io/projected/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-kube-api-access-n7b8l\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.755981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-default-certificate\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg8pn\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-mountpoint-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756162 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8707de-c464-40f6-a3c9-44f298fa48e9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756199 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmchm\" (UniqueName: \"kubernetes.io/projected/5bed6528-69c2-4144-9a38-65e7d06fe3e5-kube-api-access-rmchm\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756396 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-srv-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-service-ca-bundle\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-certs\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756489 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cj9z\" (UniqueName: 
\"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-kube-api-access-9cj9z\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcpzg\" (UniqueName: \"kubernetes.io/projected/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-kube-api-access-dcpzg\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756514 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tlp\" (UniqueName: \"kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756602 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" 
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756630 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6dea719-0142-44d7-9687-2ddd66192bfc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13956215-c64d-402b-9fac-e8deb16c0ea5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704bd537-4cff-430f-9701-6a377bc4eee8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756782 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704bd537-4cff-430f-9701-6a377bc4eee8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756830 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54ad13d0-49af-4f56-902f-89427f02819b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756872 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01336d4f-31f4-40ef-8864-f9b1234c2240-config-volume\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a561d4a9-4310-422d-b7d8-3262db5fd689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-metrics-certs\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55ls\" (UniqueName: \"kubernetes.io/projected/13956215-c64d-402b-9fac-e8deb16c0ea5-kube-api-access-j55ls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f72ae864-b1b7-4041-b254-5b3c7004124c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757222 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzf85\" (UniqueName: \"kubernetes.io/projected/01336d4f-31f4-40ef-8864-f9b1234c2240-kube-api-access-wzf85\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757239 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63f0431e-f11b-4a67-8e2d-e0c95b496767-tmpfs\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757356 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpps\" (UniqueName: 
\"kubernetes.io/projected/bb8484b0-580c-46c6-a95d-4b12b3909834-kube-api-access-2vpps\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8707de-c464-40f6-a3c9-44f298fa48e9-config\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757530 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df79f631-0239-4da7-b688-afdec4d4a8bd-config\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.757557 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6dea719-0142-44d7-9687-2ddd66192bfc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.758346 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.759184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.760852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: E0314 09:00:48.761760 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.261714866 +0000 UTC m=+234.249955311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.763711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.765053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6dea719-0142-44d7-9687-2ddd66192bfc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.756776 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.770682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.770818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.772833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.779216 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72ae864-b1b7-4041-b254-5b3c7004124c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.781200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6dea719-0142-44d7-9687-2ddd66192bfc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.786401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.789886 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.791914 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.792997 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.793961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.794215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.795975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.796302 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.798026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.799752 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.800421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgmm\" (UniqueName: \"kubernetes.io/projected/b0599ba3-328d-430a-9bd0-a0a6336dbe6f-kube-api-access-9hgmm\") pod \"openshift-config-operator-7777fb866f-r47bq\" (UID: \"b0599ba3-328d-430a-9bd0-a0a6336dbe6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.800900 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f72ae864-b1b7-4041-b254-5b3c7004124c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.802984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.818294 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7578x"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.824942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.831943 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.836885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54s2j\" (UniqueName: \"kubernetes.io/projected/b6dea719-0142-44d7-9687-2ddd66192bfc-kube-api-access-54s2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdwdz\" (UID: \"b6dea719-0142-44d7-9687-2ddd66192bfc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.845522 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvscv\" (UniqueName: \"kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858362 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8841f369-d0f7-46bc-ab50-3e45edce43d6-proxy-tls\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-registration-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbqv\" (UniqueName: \"kubernetes.io/projected/704bd537-4cff-430f-9701-6a377bc4eee8-kube-api-access-tdbqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jxrm\" (UniqueName: \"kubernetes.io/projected/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-kube-api-access-9jxrm\") pod \"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858519 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad13d0-49af-4f56-902f-89427f02819b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-socket-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858550 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-plugins-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtm9\" (UniqueName: \"kubernetes.io/projected/b6012153-f812-4b0e-9e91-4c503e1701d4-kube-api-access-4wtm9\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqnh\" (UniqueName: \"kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh\") pod \"auto-csr-approver-29557980-mm8hd\" (UID: \"17f93f38-eae8-494e-b879-4240e2712982\") " pod="openshift-infra/auto-csr-approver-29557980-mm8hd"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a8707de-c464-40f6-a3c9-44f298fa48e9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsjk\" (UniqueName: \"kubernetes.io/projected/eab57439-05c9-4de4-b825-8020f99d6a1a-kube-api-access-ldsjk\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858632 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a561d4a9-4310-422d-b7d8-3262db5fd689-config\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858649 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-csi-data-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59bnp\" (UniqueName: \"kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858678 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ad13d0-49af-4f56-902f-89427f02819b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fl4\" (UniqueName: \"kubernetes.io/projected/8841f369-d0f7-46bc-ab50-3e45edce43d6-kube-api-access-69fl4\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-proxy-tls\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55885a91-cc27-4b32-b85b-5c722d0f7e02-metrics-tls\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a561d4a9-4310-422d-b7d8-3262db5fd689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-images\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-stats-auth\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-srv-cert\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55885a91-cc27-4b32-b85b-5c722d0f7e02-trusted-ca\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01336d4f-31f4-40ef-8864-f9b1234c2240-metrics-tls\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvnhb\" (UniqueName: \"kubernetes.io/projected/52648cea-0c14-4bc7-9b0a-af61d9fee4db-kube-api-access-xvnhb\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df79f631-0239-4da7-b688-afdec4d4a8bd-serving-cert\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnll\" (UniqueName: \"kubernetes.io/projected/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-kube-api-access-ccnll\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858967 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eab57439-05c9-4de4-b825-8020f99d6a1a-cert\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.858990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859019 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-cabundle\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-webhook-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859079 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-default-certificate\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859125 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7b8l\" (UniqueName: \"kubernetes.io/projected/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-kube-api-access-n7b8l\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8707de-c464-40f6-a3c9-44f298fa48e9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859167 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-mountpoint-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859191 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmchm\" (UniqueName: \"kubernetes.io/projected/5bed6528-69c2-4144-9a38-65e7d06fe3e5-kube-api-access-rmchm\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859223 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859238 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-service-ca-bundle\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-certs\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-srv-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcpzg\" (UniqueName: \"kubernetes.io/projected/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-kube-api-access-dcpzg\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cj9z\" (UniqueName: \"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-kube-api-access-9cj9z\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859359 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859374 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704bd537-4cff-430f-9701-6a377bc4eee8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859392 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13956215-c64d-402b-9fac-e8deb16c0ea5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704bd537-4cff-430f-9701-6a377bc4eee8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54ad13d0-49af-4f56-902f-89427f02819b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859460 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a561d4a9-4310-422d-b7d8-3262db5fd689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859475 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-metrics-certs\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01336d4f-31f4-40ef-8864-f9b1234c2240-config-volume\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55ls\" (UniqueName: \"kubernetes.io/projected/13956215-c64d-402b-9fac-e8deb16c0ea5-kube-api-access-j55ls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63f0431e-f11b-4a67-8e2d-e0c95b496767-tmpfs\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzf85\" (UniqueName: \"kubernetes.io/projected/01336d4f-31f4-40ef-8864-f9b1234c2240-kube-api-access-wzf85\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859621 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpps\" (UniqueName: \"kubernetes.io/projected/bb8484b0-580c-46c6-a95d-4b12b3909834-kube-api-access-2vpps\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859636 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8707de-c464-40f6-a3c9-44f298fa48e9-config\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859650 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df79f631-0239-4da7-b688-afdec4d4a8bd-config\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-key\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjsw\" (UniqueName: \"kubernetes.io/projected/63f0431e-f11b-4a67-8e2d-e0c95b496767-kube-api-access-nhjsw\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-node-bootstrap-token\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7"
Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.859710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74c5\" (UniqueName: \"kubernetes.io/projected/df79f631-0239-4da7-b688-afdec4d4a8bd-kube-api-access-t74c5\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: E0314 09:00:48.859906 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.359892607 +0000 UTC m=+234.348132982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.860778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ad13d0-49af-4f56-902f-89427f02819b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.880019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df79f631-0239-4da7-b688-afdec4d4a8bd-serving-cert\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.880033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-cabundle\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.885492 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.885908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-srv-cert\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.886080 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13956215-c64d-402b-9fac-e8deb16c0ea5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.886682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc 
kubenswrapper[4687]: I0314 09:00:48.887437 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-service-ca-bundle\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.889443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.893793 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.894692 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-registration-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.894892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63f0431e-f11b-4a67-8e2d-e0c95b496767-tmpfs\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.897645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5a8707de-c464-40f6-a3c9-44f298fa48e9-config\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.898389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df79f631-0239-4da7-b688-afdec4d4a8bd-config\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.902998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01336d4f-31f4-40ef-8864-f9b1234c2240-metrics-tls\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.903089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-proxy-tls\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.903143 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704bd537-4cff-430f-9701-6a377bc4eee8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.903779 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-csi-data-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.906117 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01336d4f-31f4-40ef-8864-f9b1234c2240-config-volume\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.906394 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55885a91-cc27-4b32-b85b-5c722d0f7e02-trusted-ca\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.906923 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-mountpoint-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.907760 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8841f369-d0f7-46bc-ab50-3e45edce43d6-proxy-tls\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.907887 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a561d4a9-4310-422d-b7d8-3262db5fd689-config\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.908634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.908751 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-socket-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.908835 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-plugins-dir\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.913019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8841f369-d0f7-46bc-ab50-3e45edce43d6-images\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.922789 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.925989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8707de-c464-40f6-a3c9-44f298fa48e9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.926503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-metrics-certs\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.927257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwqd\" (UniqueName: \"kubernetes.io/projected/f72ae864-b1b7-4041-b254-5b3c7004124c-kube-api-access-hzwqd\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhcx\" (UID: \"f72ae864-b1b7-4041-b254-5b3c7004124c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.927621 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53768: no serving certificate available for the kubelet" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.935618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5bed6528-69c2-4144-9a38-65e7d06fe3e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.938082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.939007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-srv-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.939567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-node-bootstrap-token\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.945542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a561d4a9-4310-422d-b7d8-3262db5fd689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.945925 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg8pn\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.945945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704bd537-4cff-430f-9701-6a377bc4eee8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.946260 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad13d0-49af-4f56-902f-89427f02819b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.946652 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eab57439-05c9-4de4-b825-8020f99d6a1a-cert\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.946871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-default-certificate\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " 
pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.947352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9965g\" (UniqueName: \"kubernetes.io/projected/1db4faa2-dff4-461a-8a84-1ed84fdfb60e-kube-api-access-9965g\") pod \"migrator-59844c95c7-vznm6\" (UID: \"1db4faa2-dff4-461a-8a84-1ed84fdfb60e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.949183 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-stats-auth\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.949758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0431e-f11b-4a67-8e2d-e0c95b496767-webhook-cert\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.950201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.951611 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume\") pod 
\"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.951786 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tlp\" (UniqueName: \"kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp\") pod \"oauth-openshift-558db77b4-khk5g\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.952245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52648cea-0c14-4bc7-9b0a-af61d9fee4db-certs\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.955457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55885a91-cc27-4b32-b85b-5c722d0f7e02-metrics-tls\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.957713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb8484b0-580c-46c6-a95d-4b12b3909834-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.958125 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.958151 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6012153-f812-4b0e-9e91-4c503e1701d4-signing-key\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.960506 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:48 crc kubenswrapper[4687]: E0314 09:00:48.960824 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.460813566 +0000 UTC m=+234.449053941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:48 crc kubenswrapper[4687]: I0314 09:00:48.981513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74c5\" (UniqueName: \"kubernetes.io/projected/df79f631-0239-4da7-b688-afdec4d4a8bd-kube-api-access-t74c5\") pod \"service-ca-operator-777779d784-j4vjd\" (UID: \"df79f631-0239-4da7-b688-afdec4d4a8bd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.001452 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fl4\" (UniqueName: \"kubernetes.io/projected/8841f369-d0f7-46bc-ab50-3e45edce43d6-kube-api-access-69fl4\") pod \"machine-config-operator-74547568cd-26vhz\" (UID: \"8841f369-d0f7-46bc-ab50-3e45edce43d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.006558 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.018366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvscv\" (UniqueName: \"kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv\") pod \"marketplace-operator-79b997595-tnnb6\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.040793 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvnhb\" (UniqueName: \"kubernetes.io/projected/52648cea-0c14-4bc7-9b0a-af61d9fee4db-kube-api-access-xvnhb\") pod \"machine-config-server-zkkk7\" (UID: \"52648cea-0c14-4bc7-9b0a-af61d9fee4db\") " pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.061558 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.062176 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.562157425 +0000 UTC m=+234.550397800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.064873 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zkkk7" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.084519 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.091566 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnll\" (UniqueName: \"kubernetes.io/projected/43b2da2a-0ee7-46a6-80d7-51df3ba97cb8-kube-api-access-ccnll\") pod \"machine-config-controller-84d6567774-r9lhj\" (UID: \"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.096921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmchm\" (UniqueName: \"kubernetes.io/projected/5bed6528-69c2-4144-9a38-65e7d06fe3e5-kube-api-access-rmchm\") pod \"catalog-operator-68c6474976-vtflz\" (UID: \"5bed6528-69c2-4144-9a38-65e7d06fe3e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.098669 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.106050 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.106148 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.114147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a561d4a9-4310-422d-b7d8-3262db5fd689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pmxgf\" (UID: \"a561d4a9-4310-422d-b7d8-3262db5fd689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.121575 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54ad13d0-49af-4f56-902f-89427f02819b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l4kf\" (UID: \"54ad13d0-49af-4f56-902f-89427f02819b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.144433 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbqv\" (UniqueName: \"kubernetes.io/projected/704bd537-4cff-430f-9701-6a377bc4eee8-kube-api-access-tdbqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-stkn4\" (UID: \"704bd537-4cff-430f-9701-6a377bc4eee8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.155553 4687 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.158266 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9q7lp"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.170537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzf85\" (UniqueName: \"kubernetes.io/projected/01336d4f-31f4-40ef-8864-f9b1234c2240-kube-api-access-wzf85\") pod \"dns-default-72dmh\" (UID: \"01336d4f-31f4-40ef-8864-f9b1234c2240\") " pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.172549 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.173566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.173962 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.673944792 +0000 UTC m=+234.662185167 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.183922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bbmr4"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.195664 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.211400 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.212428 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.228547 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpps\" (UniqueName: \"kubernetes.io/projected/bb8484b0-580c-46c6-a95d-4b12b3909834-kube-api-access-2vpps\") pod \"olm-operator-6b444d44fb-mqbz4\" (UID: \"bb8484b0-580c-46c6-a95d-4b12b3909834\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.249082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jxrm\" (UniqueName: \"kubernetes.io/projected/2a3e6c78-f32a-475d-b7cd-7e7719357bcd-kube-api-access-9jxrm\") pod 
\"package-server-manager-789f6589d5-vmpw5\" (UID: \"2a3e6c78-f32a-475d-b7cd-7e7719357bcd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.249785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59bnp\" (UniqueName: \"kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp\") pod \"collect-profiles-29557980-l5tgx\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.263111 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.268287 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.272975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55ls\" (UniqueName: \"kubernetes.io/projected/13956215-c64d-402b-9fac-e8deb16c0ea5-kube-api-access-j55ls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n67l8\" (UID: \"13956215-c64d-402b-9fac-e8deb16c0ea5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.274713 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.275101 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.775084956 +0000 UTC m=+234.763325331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.276458 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.282122 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.300919 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.301977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjsw\" (UniqueName: \"kubernetes.io/projected/63f0431e-f11b-4a67-8e2d-e0c95b496767-kube-api-access-nhjsw\") pod \"packageserver-d55dfcdfc-2xtpr\" (UID: \"63f0431e-f11b-4a67-8e2d-e0c95b496767\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.302466 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v4phn"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.307011 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqnh\" (UniqueName: \"kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh\") pod \"auto-csr-approver-29557980-mm8hd\" (UID: \"17f93f38-eae8-494e-b879-4240e2712982\") " pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.312480 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53770: no serving certificate available for the kubelet" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.315191 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.330115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cj9z\" (UniqueName: \"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-kube-api-access-9cj9z\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.333593 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.337099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a8707de-c464-40f6-a3c9-44f298fa48e9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6lcvm\" (UID: \"5a8707de-c464-40f6-a3c9-44f298fa48e9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.340598 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.347631 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.349917 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7b8l\" (UniqueName: \"kubernetes.io/projected/a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e-kube-api-access-n7b8l\") pod \"csi-hostpathplugin-pdncg\" (UID: \"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e\") " pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.374670 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.375514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.375810 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.87579909 +0000 UTC m=+234.864039465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.380123 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcpzg\" (UniqueName: \"kubernetes.io/projected/6d13a0e5-c5cc-4f39-9d07-ba4986f118e2-kube-api-access-dcpzg\") pod \"router-default-5444994796-t7bl7\" (UID: \"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2\") " pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.399202 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsjk\" (UniqueName: \"kubernetes.io/projected/eab57439-05c9-4de4-b825-8020f99d6a1a-kube-api-access-ldsjk\") pod \"ingress-canary-qmmnh\" (UID: \"eab57439-05c9-4de4-b825-8020f99d6a1a\") " pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.447391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtm9\" (UniqueName: \"kubernetes.io/projected/b6012153-f812-4b0e-9e91-4c503e1701d4-kube-api-access-4wtm9\") pod \"service-ca-9c57cc56f-hw27r\" (UID: \"b6012153-f812-4b0e-9e91-4c503e1701d4\") " pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:49 crc kubenswrapper[4687]: W0314 09:00:49.464892 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfb2873a_d7cb_4b1b_8f54_23e4380142a1.slice/crio-7205be8ae7461978a5ff8b8d3773f4d679c40afe0bb3195149bf6c3aa005736c WatchSource:0}: Error finding 
container 7205be8ae7461978a5ff8b8d3773f4d679c40afe0bb3195149bf6c3aa005736c: Status 404 returned error can't find the container with id 7205be8ae7461978a5ff8b8d3773f4d679c40afe0bb3195149bf6c3aa005736c Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.474315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55885a91-cc27-4b32-b85b-5c722d0f7e02-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pvwkj\" (UID: \"55885a91-cc27-4b32-b85b-5c722d0f7e02\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.476046 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.477766 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:49.976427342 +0000 UTC m=+234.964667717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.487676 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.501442 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.515094 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.523041 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.554668 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.577740 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.578079 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.078064009 +0000 UTC m=+235.066304384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.591246 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.656961 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qmmnh" Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.678502 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.678935 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.178920266 +0000 UTC m=+235.167160641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.788060 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zkkk7" event={"ID":"52648cea-0c14-4bc7-9b0a-af61d9fee4db","Type":"ContainerStarted","Data":"5a8dff3a488115e3974e2da03fc28f2b728acbfbcf0e5f3aa4d24c4ea6db695c"} Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.790247 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.799866 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.29985252 +0000 UTC m=+235.288092895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.807101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bm6l" event={"ID":"407cd8a9-1364-412b-9d41-7c66fc18bd5e","Type":"ContainerStarted","Data":"57f6d4d3e919c55661c7d577cdc2ae7cfa1a939289a8e40f77e5e448093372d7"} Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.828921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" event={"ID":"3a5910cc-d61b-4384-a9a1-49104c3f337f","Type":"ContainerStarted","Data":"d92ce4e5e64c67e5e52915ac21fda747cecc3a7641ae06ba7998bea13b93888b"} Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.848930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" event={"ID":"cfb2873a-d7cb-4b1b-8f54-23e4380142a1","Type":"ContainerStarted","Data":"7205be8ae7461978a5ff8b8d3773f4d679c40afe0bb3195149bf6c3aa005736c"} Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.855984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" event={"ID":"53609d3c-0f7c-4413-898c-8d4e6f47db22","Type":"ContainerStarted","Data":"6559bd0758ddf3a019c2c07b98e16f25bb8b391e42bc55308c529217f5c2e0fe"} Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.897081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.897297 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.397265612 +0000 UTC m=+235.385505997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.897548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:49 crc kubenswrapper[4687]: E0314 09:00:49.898031 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.39802083 +0000 UTC m=+235.386261205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.923223 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7578x"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.942582 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zzmgp"] Mar 14 09:00:49 crc kubenswrapper[4687]: I0314 09:00:49.998248 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.000174 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.500143109 +0000 UTC m=+235.488383494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.003888 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53786: no serving certificate available for the kubelet" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.100144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.100517 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.600503474 +0000 UTC m=+235.588743849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.152503 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd"] Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.202250 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.202913 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.702897619 +0000 UTC m=+235.691137984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.244521 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz"] Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.290777 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-24tmc" podStartSLOduration=163.290756725 podStartE2EDuration="2m43.290756725s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:50.25666419 +0000 UTC m=+235.244904565" watchObservedRunningTime="2026-03-14 09:00:50.290756725 +0000 UTC m=+235.278997100" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.294213 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" podStartSLOduration=163.29420538 podStartE2EDuration="2m43.29420538s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:50.290008276 +0000 UTC m=+235.278248651" watchObservedRunningTime="2026-03-14 09:00:50.29420538 +0000 UTC m=+235.282445755" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.304020 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.304588 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.804576337 +0000 UTC m=+235.792816712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.405101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.405591 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:50.905567087 +0000 UTC m=+235.893807462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.515242 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.515557 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.01554499 +0000 UTC m=+236.003785365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.617042 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.617812 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.117797872 +0000 UTC m=+236.106038247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.719687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.719998 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.219987252 +0000 UTC m=+236.208227627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.826750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.827155 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.327139985 +0000 UTC m=+236.315380360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.837254 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hnql" podStartSLOduration=163.837229705 podStartE2EDuration="2m43.837229705s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:50.827058113 +0000 UTC m=+235.815298478" watchObservedRunningTime="2026-03-14 09:00:50.837229705 +0000 UTC m=+235.825470090" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.857459 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" podStartSLOduration=163.857441096 podStartE2EDuration="2m43.857441096s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:50.856469402 +0000 UTC m=+235.844709787" watchObservedRunningTime="2026-03-14 09:00:50.857441096 +0000 UTC m=+235.845681461" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.869519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7578x" event={"ID":"6b74e63b-7771-4f32-9fca-0e112597d97e","Type":"ContainerStarted","Data":"db8e40e7ec554869c20db0cd6261f725230e9659cc033adec845d2bdfb42866c"} Mar 14 09:00:50 crc 
kubenswrapper[4687]: I0314 09:00:50.869564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7578x" event={"ID":"6b74e63b-7771-4f32-9fca-0e112597d97e","Type":"ContainerStarted","Data":"20c99eac112a9b7415f6f32b2297a55d057fb1e8d9fb94e469841e72e6b689f0"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.869808 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.877840 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-7578x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.877891 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7578x" podUID="6b74e63b-7771-4f32-9fca-0e112597d97e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.878660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zkkk7" event={"ID":"52648cea-0c14-4bc7-9b0a-af61d9fee4db","Type":"ContainerStarted","Data":"1510f09e83a79a1af2b6089f018ba4280b410afca77058d0a0cf7351aecb7997"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.883703 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t7bl7" event={"ID":"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2","Type":"ContainerStarted","Data":"b7234ed90f54f74d489d42460bd83af0c69ec02bc87de99fcec0ca9e16fc9987"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.883748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-t7bl7" event={"ID":"6d13a0e5-c5cc-4f39-9d07-ba4986f118e2","Type":"ContainerStarted","Data":"190fcd975a8e3c12171b218a91aa62edcf813c54d9e05e41ce4c96ee1afbca51"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.890180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" event={"ID":"b6dea719-0142-44d7-9687-2ddd66192bfc","Type":"ContainerStarted","Data":"4a0319f66002b4d2d0a5fa549ed88f3c624159fa74408d4c8df400626c235fd0"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.900135 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" event={"ID":"3a5910cc-d61b-4384-a9a1-49104c3f337f","Type":"ContainerStarted","Data":"7d3f7181166785a9b4b429d8adbfd1cb4b738b4d982e0e03a22f3c092841b6ce"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.904399 4687 generic.go:334] "Generic (PLEG): container finished" podID="cfb2873a-d7cb-4b1b-8f54-23e4380142a1" containerID="6807a126859ceafbe761ed1ef256ad14765be902afd74843dd8349533f2bcb77" exitCode=0 Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.904658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" event={"ID":"cfb2873a-d7cb-4b1b-8f54-23e4380142a1","Type":"ContainerDied","Data":"6807a126859ceafbe761ed1ef256ad14765be902afd74843dd8349533f2bcb77"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.907728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" event={"ID":"df79f631-0239-4da7-b688-afdec4d4a8bd","Type":"ContainerStarted","Data":"ccb0dba9ee69b141b6a8e657a690e3fd40be9c104546b61e03600d885b27874a"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.910266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bm6l" 
event={"ID":"407cd8a9-1364-412b-9d41-7c66fc18bd5e","Type":"ContainerStarted","Data":"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.916245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" event={"ID":"b37ac934-0137-4459-87aa-ade97e608134","Type":"ContainerStarted","Data":"f484b3929385811014eed9d1209155d6b33d70e7260c0fcfb56112b7726b8731"} Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.926495 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-lkkvm" podStartSLOduration=163.926477935 podStartE2EDuration="2m43.926477935s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:50.898383849 +0000 UTC m=+235.886624224" watchObservedRunningTime="2026-03-14 09:00:50.926477935 +0000 UTC m=+235.914718300" Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.928162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:50 crc kubenswrapper[4687]: E0314 09:00:50.928545 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.428522495 +0000 UTC m=+236.416762870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.928668 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-brwhw"] Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.974090 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4"] Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.985753 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r47bq"] Mar 14 09:00:50 crc kubenswrapper[4687]: I0314 09:00:50.985993 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-khk5g"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.004168 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.029570 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.030741 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.530724236 +0000 UTC m=+236.518964611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.072123 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kp5dx" podStartSLOduration=164.072107371 podStartE2EDuration="2m44.072107371s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.070746787 +0000 UTC m=+236.058987152" watchObservedRunningTime="2026-03-14 09:00:51.072107371 +0000 UTC m=+236.060347746" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.131378 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.132890 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.632875695 +0000 UTC m=+236.621116070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.147085 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkv5c" podStartSLOduration=164.147065146 podStartE2EDuration="2m44.147065146s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.115616788 +0000 UTC m=+236.103857163" watchObservedRunningTime="2026-03-14 09:00:51.147065146 +0000 UTC m=+236.135305521" Mar 14 09:00:51 crc kubenswrapper[4687]: W0314 09:00:51.148608 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d0c6355_63a3_4e53_b8f6_283e5ef456ed.slice/crio-0496dd3ee038ceaafda16e6e907c77a2bac31f94ef986a664dd754f52986c961 WatchSource:0}: Error finding container 0496dd3ee038ceaafda16e6e907c77a2bac31f94ef986a664dd754f52986c961: Status 404 returned error can't find the container with id 0496dd3ee038ceaafda16e6e907c77a2bac31f94ef986a664dd754f52986c961 Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.223282 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" podStartSLOduration=164.223262433 podStartE2EDuration="2m44.223262433s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.217820148 +0000 UTC m=+236.206060523" watchObservedRunningTime="2026-03-14 09:00:51.223262433 +0000 UTC m=+236.211502808" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.232786 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.233370 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.733325122 +0000 UTC m=+236.721565497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.266789 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.273249 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.279537 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.305560 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53792: no serving certificate available for the kubelet" Mar 14 09:00:51 crc kubenswrapper[4687]: W0314 09:00:51.305908 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db4faa2_dff4_461a_8a84_1ed84fdfb60e.slice/crio-7afea96f50548b2fd31cdee5dedaf4a9b652adef25eba7b43d25a3ab3fd67370 WatchSource:0}: Error finding container 7afea96f50548b2fd31cdee5dedaf4a9b652adef25eba7b43d25a3ab3fd67370: Status 404 returned error can't find the container with id 7afea96f50548b2fd31cdee5dedaf4a9b652adef25eba7b43d25a3ab3fd67370 Mar 14 09:00:51 crc kubenswrapper[4687]: W0314 09:00:51.307066 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf72ae864_b1b7_4041_b254_5b3c7004124c.slice/crio-edb4262c5b691d08388561d9cbf318753a9871a68a1dcd80b3d9470f1c211c42 WatchSource:0}: Error finding container edb4262c5b691d08388561d9cbf318753a9871a68a1dcd80b3d9470f1c211c42: Status 404 returned error can't find the container with id edb4262c5b691d08388561d9cbf318753a9871a68a1dcd80b3d9470f1c211c42 Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.333989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.334465 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.834441806 +0000 UTC m=+236.822682241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.365517 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.372774 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-t7bl7" podStartSLOduration=164.372758765 podStartE2EDuration="2m44.372758765s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.371246037 +0000 UTC m=+236.359486442" watchObservedRunningTime="2026-03-14 09:00:51.372758765 +0000 UTC m=+236.360999140" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.417103 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4bm6l" podStartSLOduration=164.417089312 podStartE2EDuration="2m44.417089312s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.416930418 +0000 UTC m=+236.405170813" watchObservedRunningTime="2026-03-14 09:00:51.417089312 +0000 UTC m=+236.405329687" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.435511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.435960 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:51.935941529 +0000 UTC m=+236.924181904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.451489 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7578x" podStartSLOduration=164.451466783 podStartE2EDuration="2m44.451466783s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.449641819 +0000 UTC m=+236.437882224" watchObservedRunningTime="2026-03-14 09:00:51.451466783 +0000 UTC m=+236.439707158" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.499986 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zkkk7" podStartSLOduration=5.499965854 podStartE2EDuration="5.499965854s" podCreationTimestamp="2026-03-14 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.487914216 +0000 UTC m=+236.476154591" watchObservedRunningTime="2026-03-14 09:00:51.499965854 +0000 UTC m=+236.488206229" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.504318 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72dmh"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.516864 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.518974 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.523572 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:51 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:51 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:51 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.523825 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.536518 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.537591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.537973 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.037958805 +0000 UTC m=+237.026199190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.544290 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-mm8hd"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.549090 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.557215 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.567713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.572134 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-pdncg"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.575079 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.577607 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.579710 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.586304 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.639860 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.640172 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.140157136 +0000 UTC m=+237.128397511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.661514 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.692781 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qmmnh"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.719685 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hw27r"] Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.741958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.742288 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.242273454 +0000 UTC m=+237.230513829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.756103 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5"] Mar 14 09:00:51 crc kubenswrapper[4687]: W0314 09:00:51.840651 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a3e6c78_f32a_475d_b7cd_7e7719357bcd.slice/crio-c6832a0c72b4558eecfeeec85a3bd15d53fa0eb3a52133047e8001526b43df25 WatchSource:0}: Error finding container c6832a0c72b4558eecfeeec85a3bd15d53fa0eb3a52133047e8001526b43df25: Status 404 returned error can't find the container with id c6832a0c72b4558eecfeeec85a3bd15d53fa0eb3a52133047e8001526b43df25 Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.842845 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.843064 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.343028258 +0000 UTC m=+237.331268633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.843319 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.843639 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.343596572 +0000 UTC m=+237.331836947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: W0314 09:00:51.852164 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6012153_f812_4b0e_9e91_4c503e1701d4.slice/crio-d4b546678440675c7ae342283977096f5a554dbb57c0daf58efffaea1e5d0f89 WatchSource:0}: Error finding container d4b546678440675c7ae342283977096f5a554dbb57c0daf58efffaea1e5d0f89: Status 404 returned error can't find the container with id d4b546678440675c7ae342283977096f5a554dbb57c0daf58efffaea1e5d0f89 Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.931292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" event={"ID":"3a5910cc-d61b-4384-a9a1-49104c3f337f","Type":"ContainerStarted","Data":"6c01970634d57e107cf27db51ffc3917d8693e1044b6418664cc26cb9a0bd014"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.943395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" event={"ID":"cfb2873a-d7cb-4b1b-8f54-23e4380142a1","Type":"ContainerStarted","Data":"cccc173920b66d9a6e615f47e2155d63d6238feb8c677dbedd247e76a023922a"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.943894 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.944217 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.444195683 +0000 UTC m=+237.432436058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.944417 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:51 crc kubenswrapper[4687]: E0314 09:00:51.944684 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.444677145 +0000 UTC m=+237.432917520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.947125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" event={"ID":"df79f631-0239-4da7-b688-afdec4d4a8bd","Type":"ContainerStarted","Data":"3385601ffb8a8b7dfc99b338f667ed00b728d4761afe86b7ba8b03c01747ae09"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.951488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerStarted","Data":"545b592103aadceb27b15a8be10b75dc46c6edfa3a74be023e2ff9315636b2af"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.957953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" event={"ID":"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e","Type":"ContainerStarted","Data":"ab9949c6cf77536244e2db1e8e1380cf51c7a601828060ec80b0007dad86d21b"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.962373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" event={"ID":"17f93f38-eae8-494e-b879-4240e2712982","Type":"ContainerStarted","Data":"3ddb8e89843b3746619d2b6457bafb677911aaf2b1b238a67fb81474081e9ce5"} Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.970906 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j4vjd" 
podStartSLOduration=164.970885034 podStartE2EDuration="2m44.970885034s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.97073382 +0000 UTC m=+236.958974205" watchObservedRunningTime="2026-03-14 09:00:51.970885034 +0000 UTC m=+236.959125409" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.979609 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-v4phn" podStartSLOduration=164.979585779 podStartE2EDuration="2m44.979585779s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:51.950393016 +0000 UTC m=+236.938633411" watchObservedRunningTime="2026-03-14 09:00:51.979585779 +0000 UTC m=+236.967826154" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.996413 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:51 crc kubenswrapper[4687]: I0314 09:00:51.997052 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.008433 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.012210 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0599ba3-328d-430a-9bd0-a0a6336dbe6f" containerID="ad3bc8593f9e3e8f298dc6363ee80ee59673cc89e50c3e4abcfeac9d4249d0d3" exitCode=0 Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.012326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" event={"ID":"b0599ba3-328d-430a-9bd0-a0a6336dbe6f","Type":"ContainerDied","Data":"ad3bc8593f9e3e8f298dc6363ee80ee59673cc89e50c3e4abcfeac9d4249d0d3"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.012414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" event={"ID":"b0599ba3-328d-430a-9bd0-a0a6336dbe6f","Type":"ContainerStarted","Data":"8cb88b14f231cb0801146017b4e98a35206b92e303ce82b90da6c2bb55590bf5"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.015255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qmmnh" event={"ID":"eab57439-05c9-4de4-b825-8020f99d6a1a","Type":"ContainerStarted","Data":"0fbbea76579391a9e2c17588fbbe7bea8b61dc7a652f24dc2faeb0af276057bf"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.024823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" event={"ID":"53609d3c-0f7c-4413-898c-8d4e6f47db22","Type":"ContainerStarted","Data":"70f861cfc8aa4c0183e0fe0c2f7726cf044961176616484b9864ba1ffd99a2f3"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.024865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" event={"ID":"53609d3c-0f7c-4413-898c-8d4e6f47db22","Type":"ContainerStarted","Data":"293f273925e4b8160422c4a42c8c3fc403842de1d9b7a107f4de367df97827fb"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.049353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" event={"ID":"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8","Type":"ContainerStarted","Data":"6b943acf13e8425ea0db663a2cebfbebf3faeee9a2b0e3196bd9c65781819749"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.050825 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.053662 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.553636663 +0000 UTC m=+237.541877048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.072182 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" event={"ID":"2a3e6c78-f32a-475d-b7cd-7e7719357bcd","Type":"ContainerStarted","Data":"c6832a0c72b4558eecfeeec85a3bd15d53fa0eb3a52133047e8001526b43df25"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.078413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" event={"ID":"bb8484b0-580c-46c6-a95d-4b12b3909834","Type":"ContainerStarted","Data":"5844b8c84051f5129984a2cc356277045dd36f7da079b688ab26649e46ae15d6"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.078452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" event={"ID":"bb8484b0-580c-46c6-a95d-4b12b3909834","Type":"ContainerStarted","Data":"ddf78411eda08c7219691bcc7721c48ed2b6d2ed067d705507c0b83f7e2b8b1b"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.080886 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.084190 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9q7lp" podStartSLOduration=165.084173229 podStartE2EDuration="2m45.084173229s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.048997218 +0000 UTC m=+237.037237593" watchObservedRunningTime="2026-03-14 09:00:52.084173229 +0000 UTC m=+237.072413604" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.098120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" event={"ID":"b37ac934-0137-4459-87aa-ade97e608134","Type":"ContainerStarted","Data":"f8be2802b6f46fecd51252a4ff65c93ba930f9d3d055b372e405b333652b9065"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.103446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" event={"ID":"63f0431e-f11b-4a67-8e2d-e0c95b496767","Type":"ContainerStarted","Data":"ef7ce4adfd72fac29623cc01e8566270f3c827199833bfcb153a60188ea79ca6"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.103734 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" podStartSLOduration=165.103716393 podStartE2EDuration="2m45.103716393s" 
podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.101884107 +0000 UTC m=+237.090124502" watchObservedRunningTime="2026-03-14 09:00:52.103716393 +0000 UTC m=+237.091956768" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.107275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" event={"ID":"a0893729-52eb-4339-83bb-e7ec8ba388b7","Type":"ContainerStarted","Data":"8dd5d0f04a3fda4e2944760a780445520f72a82048b1d1711d510b283b115980"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.107328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" event={"ID":"a0893729-52eb-4339-83bb-e7ec8ba388b7","Type":"ContainerStarted","Data":"b07a55643d924d405e0ad9202cad1a147feec7d35e53dddc29a2d13a55330c6d"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.109555 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqbz4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.109596 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" podUID="bb8484b0-580c-46c6-a95d-4b12b3909834" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.110758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" 
event={"ID":"b6dea719-0142-44d7-9687-2ddd66192bfc","Type":"ContainerStarted","Data":"73918e1f55cc29736c7cfc27933d94ff1c1ef6aadf681fd6f9e40e784121f464"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.150773 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72dmh" event={"ID":"01336d4f-31f4-40ef-8864-f9b1234c2240","Type":"ContainerStarted","Data":"da4fb3b2e52ec5df675fc3a17b6e105db8c7703dd703aca4ab2e0e290649136c"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.152227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.152291 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zzmgp" podStartSLOduration=165.152275565 podStartE2EDuration="2m45.152275565s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.150828279 +0000 UTC m=+237.139068664" watchObservedRunningTime="2026-03-14 09:00:52.152275565 +0000 UTC m=+237.140515940" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.154259 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.654243914 +0000 UTC m=+237.642484289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.179637 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdwdz" podStartSLOduration=165.179612292 podStartE2EDuration="2m45.179612292s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.174198638 +0000 UTC m=+237.162439013" watchObservedRunningTime="2026-03-14 09:00:52.179612292 +0000 UTC m=+237.167852667" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.205406 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" event={"ID":"704bd537-4cff-430f-9701-6a377bc4eee8","Type":"ContainerStarted","Data":"fb80509dfa7b9cb6807604d684abae76e15ed37b00c31f3e1bdcd38b9a59e9c9"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.205452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" event={"ID":"704bd537-4cff-430f-9701-6a377bc4eee8","Type":"ContainerStarted","Data":"6744184b2812b15be95070b36479ca6cb2bee2cb27670c7897e5fee55076d878"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.224528 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-brwhw" podStartSLOduration=165.224506694 podStartE2EDuration="2m45.224506694s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.208827066 +0000 UTC m=+237.197067461" watchObservedRunningTime="2026-03-14 09:00:52.224506694 +0000 UTC m=+237.212747079" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.242650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" event={"ID":"55885a91-cc27-4b32-b85b-5c722d0f7e02","Type":"ContainerStarted","Data":"d22133abf4db97c996b817cbb4b31e5ac72d952c636dedb87a14dcf9edd42e57"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.253432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" event={"ID":"1db4faa2-dff4-461a-8a84-1ed84fdfb60e","Type":"ContainerStarted","Data":"6072a906c8e104e5f72f501d39253d50d2b3ac06bb44563654e85ce183d6781a"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.253473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" event={"ID":"1db4faa2-dff4-461a-8a84-1ed84fdfb60e","Type":"ContainerStarted","Data":"7afea96f50548b2fd31cdee5dedaf4a9b652adef25eba7b43d25a3ab3fd67370"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.255074 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.258750 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.7587291 +0000 UTC m=+237.746969465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.302873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" event={"ID":"5bed6528-69c2-4144-9a38-65e7d06fe3e5","Type":"ContainerStarted","Data":"1d30e903d5db6ab21b2feac0cd8993ade609ab3c067ed572f20da34245fb9d13"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.302921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" event={"ID":"5bed6528-69c2-4144-9a38-65e7d06fe3e5","Type":"ContainerStarted","Data":"add8d7678817e74b671ef3909fac0600fa808e60044e87e14c2e3cebc6ffc853"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.303784 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.310502 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vtflz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= 
Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.310584 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" podUID="5bed6528-69c2-4144-9a38-65e7d06fe3e5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.328667 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stkn4" podStartSLOduration=165.328653042 podStartE2EDuration="2m45.328653042s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.224203866 +0000 UTC m=+237.212444251" watchObservedRunningTime="2026-03-14 09:00:52.328653042 +0000 UTC m=+237.316893417" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.330275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" event={"ID":"54ad13d0-49af-4f56-902f-89427f02819b","Type":"ContainerStarted","Data":"93d8a6b319c6cede0049191c05c2971376091f2c46707ef37d113823ef5f2669"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.333121 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" podStartSLOduration=165.333103822 podStartE2EDuration="2m45.333103822s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.328086628 +0000 UTC m=+237.316327003" watchObservedRunningTime="2026-03-14 09:00:52.333103822 +0000 UTC 
m=+237.321344197" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.356843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.357246 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:52.85722905 +0000 UTC m=+237.845469425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.370706 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" event={"ID":"f72ae864-b1b7-4041-b254-5b3c7004124c","Type":"ContainerStarted","Data":"f8b8bad4420d848c993930aac6a5d396c3f98bad8dab0c9e40f41dcc56537d8e"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.370758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" event={"ID":"f72ae864-b1b7-4041-b254-5b3c7004124c","Type":"ContainerStarted","Data":"edb4262c5b691d08388561d9cbf318753a9871a68a1dcd80b3d9470f1c211c42"} Mar 14 09:00:52 
crc kubenswrapper[4687]: I0314 09:00:52.396601 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhcx" podStartSLOduration=165.396549733 podStartE2EDuration="2m45.396549733s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.395108708 +0000 UTC m=+237.383349083" watchObservedRunningTime="2026-03-14 09:00:52.396549733 +0000 UTC m=+237.384790108" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.444449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" event={"ID":"8841f369-d0f7-46bc-ab50-3e45edce43d6","Type":"ContainerStarted","Data":"4fe2496040841a9462263b14aada5e19174c3fd51d54ca617f5791f779cd8dbc"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.444518 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" event={"ID":"8841f369-d0f7-46bc-ab50-3e45edce43d6","Type":"ContainerStarted","Data":"3dcf26bc0d47dab677229b8360570662a7fd091976450b8019d1e7d227e93e16"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.458058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.459603 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:00:52.959583294 +0000 UTC m=+237.947823669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.463572 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" podStartSLOduration=165.463555483 podStartE2EDuration="2m45.463555483s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.462823144 +0000 UTC m=+237.451063519" watchObservedRunningTime="2026-03-14 09:00:52.463555483 +0000 UTC m=+237.451795858" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.478790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" event={"ID":"b6012153-f812-4b0e-9e91-4c503e1701d4","Type":"ContainerStarted","Data":"d4b546678440675c7ae342283977096f5a554dbb57c0daf58efffaea1e5d0f89"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.480893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" event={"ID":"5d0c6355-63a3-4e53-b8f6-283e5ef456ed","Type":"ContainerStarted","Data":"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.480936 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" event={"ID":"5d0c6355-63a3-4e53-b8f6-283e5ef456ed","Type":"ContainerStarted","Data":"0496dd3ee038ceaafda16e6e907c77a2bac31f94ef986a664dd754f52986c961"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.488054 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.499183 4687 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-khk5g container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.499236 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.513275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" event={"ID":"5a8707de-c464-40f6-a3c9-44f298fa48e9","Type":"ContainerStarted","Data":"7df0696ac10e016a8434bf34e616865425f758e58990ea518132e34a20add1e2"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.515307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" event={"ID":"13956215-c64d-402b-9fac-e8deb16c0ea5","Type":"ContainerStarted","Data":"bdb59d4702284fed76023005199de1613017d6a34944018dbabfb21b87e9fec5"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.528622 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:52 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:52 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:52 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.528885 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.528958 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" event={"ID":"a561d4a9-4310-422d-b7d8-3262db5fd689","Type":"ContainerStarted","Data":"b0f765573dc9c91c0945636646415fdfc5b01ff67580e45001b248b69bd5a098"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.533820 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" podStartSLOduration=165.533801232 podStartE2EDuration="2m45.533801232s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.526767998 +0000 UTC m=+237.515008373" watchObservedRunningTime="2026-03-14 09:00:52.533801232 +0000 UTC m=+237.522041607" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.534002 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" 
event={"ID":"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8","Type":"ContainerStarted","Data":"e167e6552e1ed890fdb6e29784b4da4617ede00ab7d41201941e6bfd500e6b51"} Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.535244 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-7578x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.535291 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7578x" podUID="6b74e63b-7771-4f32-9fca-0e112597d97e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.560254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.562151 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.062136133 +0000 UTC m=+238.050376608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.579076 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" podStartSLOduration=165.579059612 podStartE2EDuration="2m45.579059612s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.577938194 +0000 UTC m=+237.566178569" watchObservedRunningTime="2026-03-14 09:00:52.579059612 +0000 UTC m=+237.567299977" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.629626 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" podStartSLOduration=165.629611433 podStartE2EDuration="2m45.629611433s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:52.629406279 +0000 UTC m=+237.617646664" watchObservedRunningTime="2026-03-14 09:00:52.629611433 +0000 UTC m=+237.617851808" Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.662791 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.664954 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.164932478 +0000 UTC m=+238.153172853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.768945 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.769864 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.269848606 +0000 UTC m=+238.258088981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.870034 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.870134 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.370114088 +0000 UTC m=+238.358354463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.870216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.870538 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.370530298 +0000 UTC m=+238.358770673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.971187 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.971437 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.471323754 +0000 UTC m=+238.459564119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:52 crc kubenswrapper[4687]: I0314 09:00:52.971696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:52 crc kubenswrapper[4687]: E0314 09:00:52.972043 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.472034332 +0000 UTC m=+238.460274707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.072903 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.073258 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.573229297 +0000 UTC m=+238.561469672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.073299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.073676 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.573662098 +0000 UTC m=+238.561902463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.174351 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.174465 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.674437413 +0000 UTC m=+238.662677788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.174644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.174985 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.674976527 +0000 UTC m=+238.663216902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.275865 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.276061 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.776026329 +0000 UTC m=+238.764266704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.276263 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.276645 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.776632513 +0000 UTC m=+238.764872888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.377872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.378007 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.877990204 +0000 UTC m=+238.866230579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.378324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.378575 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.878564407 +0000 UTC m=+238.866804782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.479451 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.479635 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.979618379 +0000 UTC m=+238.967858754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.479821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.480092 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:53.980084321 +0000 UTC m=+238.968324696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.520794 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:53 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:53 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:53 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.520848 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.551914 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" event={"ID":"a561d4a9-4310-422d-b7d8-3262db5fd689","Type":"ContainerStarted","Data":"9bf9dbfa32bf30d9df4a53b3f13c9daec4adda3f4a11f04fd45c66fbaecd2fb5"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.553454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" event={"ID":"63f0431e-f11b-4a67-8e2d-e0c95b496767","Type":"ContainerStarted","Data":"183099973e3e25bfba744ec82ba502f648b398dfdb843fb3943ccb86e087ef26"} 
Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.553601 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.558927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerStarted","Data":"c3a8a25485abc852dcbab8b6a0eea628b95b7b17401b499efeb63a28552f3367"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.559145 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.560134 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tnnb6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.560172 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.569499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qmmnh" event={"ID":"eab57439-05c9-4de4-b825-8020f99d6a1a","Type":"ContainerStarted","Data":"545b65531f3b9defdf2e70cb067afef7e3a9981227fc131f5ef88d6705e8e0d5"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.575240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" 
event={"ID":"1db4faa2-dff4-461a-8a84-1ed84fdfb60e","Type":"ContainerStarted","Data":"f45001596d35d51b26661a2169f6d42de3e23b202f294e39c8481d4fcd1de785"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.578358 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pmxgf" podStartSLOduration=166.578325024 podStartE2EDuration="2m46.578325024s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.576728084 +0000 UTC m=+238.564968459" watchObservedRunningTime="2026-03-14 09:00:53.578325024 +0000 UTC m=+238.566565399" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.580485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.580549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" event={"ID":"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e","Type":"ContainerStarted","Data":"661e7bf1f0aef7d80447fe7db481040a80107d1f3d1f67a81cc7ef3e76d5f953"} Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.580717 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.080699613 +0000 UTC m=+239.068940018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.582368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" event={"ID":"cfb2873a-d7cb-4b1b-8f54-23e4380142a1","Type":"ContainerStarted","Data":"9f95dd93534e4595debfb8f0f9d8ce67d0bee0984d58616a9189f9bafd8e0616"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.599131 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" event={"ID":"5a8707de-c464-40f6-a3c9-44f298fa48e9","Type":"ContainerStarted","Data":"2d6eb1fc43fe9e72681b72971ea20c6febaef6ee400407d1448c48a6683a8b83"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.609725 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" event={"ID":"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8","Type":"ContainerStarted","Data":"d0c8eb3db79fed9a7cd0ada8857577dd05021b0009e222c3d6273cf2522943c1"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.615399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" event={"ID":"54ad13d0-49af-4f56-902f-89427f02819b","Type":"ContainerStarted","Data":"71d18c417838a86a037bcc9d13ed9231b23a46c23b362e69180b327fad84e1f2"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.619588 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" podStartSLOduration=166.619564194 podStartE2EDuration="2m46.619564194s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.613581196 +0000 UTC m=+238.601821561" watchObservedRunningTime="2026-03-14 09:00:53.619564194 +0000 UTC m=+238.607804569" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.637263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" event={"ID":"2a3e6c78-f32a-475d-b7cd-7e7719357bcd","Type":"ContainerStarted","Data":"5fcd2f43fd6fe41e3c84a6bcf51e521374afd7f369a6fe46ee3c185ea38a559a"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.637310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" event={"ID":"2a3e6c78-f32a-475d-b7cd-7e7719357bcd","Type":"ContainerStarted","Data":"23af6685cb47d7e941385d0f8416fa1aa9713545a6c0a4cf3fbe5cdb9fdcd34d"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.637374 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.639964 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vznm6" podStartSLOduration=166.639947949 podStartE2EDuration="2m46.639947949s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.639100288 +0000 UTC m=+238.627340683" watchObservedRunningTime="2026-03-14 09:00:53.639947949 +0000 UTC m=+238.628188324" Mar 14 
09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.651605 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" event={"ID":"b0599ba3-328d-430a-9bd0-a0a6336dbe6f","Type":"ContainerStarted","Data":"deb914591da7e18b63aeab93a138ea312c956adb2323b98268c86d69b3fac5c3"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.651654 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.669900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n67l8" event={"ID":"13956215-c64d-402b-9fac-e8deb16c0ea5","Type":"ContainerStarted","Data":"f86555a63779c957311170ca33346023b76c1a64adde9f5eda6ddff3a96f3537"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.682152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" event={"ID":"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8","Type":"ContainerStarted","Data":"b5148b016650af838dfb50ab209f0df5025f38859a3ad399cc7bf57bd46c2650"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.682200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9lhj" event={"ID":"43b2da2a-0ee7-46a6-80d7-51df3ba97cb8","Type":"ContainerStarted","Data":"5fd0406efbe54e86d1f1aa324aaeef9f463fc55ea3717df7420a5d914343938f"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.682684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.684396 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.18438205 +0000 UTC m=+239.172622505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.690237 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" podStartSLOduration=166.690219854 podStartE2EDuration="2m46.690219854s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.661307268 +0000 UTC m=+238.649547643" watchObservedRunningTime="2026-03-14 09:00:53.690219854 +0000 UTC m=+238.678460229" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.690924 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qmmnh" podStartSLOduration=7.690918871 podStartE2EDuration="7.690918871s" podCreationTimestamp="2026-03-14 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.690172723 +0000 UTC m=+238.678413098" watchObservedRunningTime="2026-03-14 
09:00:53.690918871 +0000 UTC m=+238.679159246" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.693615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" event={"ID":"b6012153-f812-4b0e-9e91-4c503e1701d4","Type":"ContainerStarted","Data":"049e78dc211da21b293626cd10c7acaf41d44b6e9586fd47fae9be7b0a73ebaf"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.724988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72dmh" event={"ID":"01336d4f-31f4-40ef-8864-f9b1234c2240","Type":"ContainerStarted","Data":"5b602cf0e9b5ee0f720f25c061d2da88fd483a057a957dd467598f5e46f21e01"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.725034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72dmh" event={"ID":"01336d4f-31f4-40ef-8864-f9b1234c2240","Type":"ContainerStarted","Data":"c9c212b1d455f9ccc061b5843d613c96ed49d95b4aff6d8fd33c5ef804f4483e"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.725091 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6lcvm" podStartSLOduration=166.725076647 podStartE2EDuration="2m46.725076647s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.722603306 +0000 UTC m=+238.710843681" watchObservedRunningTime="2026-03-14 09:00:53.725076647 +0000 UTC m=+238.713317022" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.725606 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-72dmh" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.736982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-26vhz" 
event={"ID":"8841f369-d0f7-46bc-ab50-3e45edce43d6","Type":"ContainerStarted","Data":"126c71df10074f99bb5fc0d189b52340ea8067f763cd0fcae6c7489fcaa95a4e"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.748763 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" podStartSLOduration=166.748747964 podStartE2EDuration="2m46.748747964s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.748269331 +0000 UTC m=+238.736509706" watchObservedRunningTime="2026-03-14 09:00:53.748747964 +0000 UTC m=+238.736988339" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.779125 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" podStartSLOduration=53.779098374 podStartE2EDuration="53.779098374s" podCreationTimestamp="2026-03-14 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.776461819 +0000 UTC m=+238.764702184" watchObservedRunningTime="2026-03-14 09:00:53.779098374 +0000 UTC m=+238.767338749" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.787219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.790156 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.288329113 +0000 UTC m=+239.276569568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.798390 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" event={"ID":"55885a91-cc27-4b32-b85b-5c722d0f7e02","Type":"ContainerStarted","Data":"3a20f25e67b3da70d0c78789f418a9e6b35c79f6c462b78a8425374caf307f6a"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.798439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" event={"ID":"55885a91-cc27-4b32-b85b-5c722d0f7e02","Type":"ContainerStarted","Data":"eda469ccfbbeeaa16c04fe83d9dad03801d16c5aed04a29dcb18422a2a75d6fb"} Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.812633 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqbz4" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.812830 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m97vv" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.812897 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vtflz" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.815784 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" podStartSLOduration=166.815772172 podStartE2EDuration="2m46.815772172s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.815066715 +0000 UTC m=+238.803307090" watchObservedRunningTime="2026-03-14 09:00:53.815772172 +0000 UTC m=+238.804012547" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.822231 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.822450 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerName="controller-manager" containerID="cri-o://94dc3a5657c46609c7c87df713814094c2c0a591cb869b3465392812bc3972c2" gracePeriod=30 Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.845962 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l4kf" podStartSLOduration=166.84594763 podStartE2EDuration="2m46.84594763s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.84476814 +0000 UTC m=+238.833008515" watchObservedRunningTime="2026-03-14 09:00:53.84594763 +0000 UTC m=+238.834188005" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.847780 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.890451 4687 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.890645 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerName="route-controller-manager" containerID="cri-o://5cd1c1b1695b8a7495381cac44ad3d9af033c41c7d932b6026cce1721d018f42" gracePeriod=30 Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.891076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.933907 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.433888207 +0000 UTC m=+239.422128582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.963322 4687 ???:1] "http: TLS handshake error from 192.168.126.11:53794: no serving certificate available for the kubelet" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.991046 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" podStartSLOduration=166.991026872 podStartE2EDuration="2m46.991026872s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.962873495 +0000 UTC m=+238.951113870" watchObservedRunningTime="2026-03-14 09:00:53.991026872 +0000 UTC m=+238.979267247" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.991177 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pvwkj" podStartSLOduration=166.991173646 podStartE2EDuration="2m46.991173646s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:53.989460813 +0000 UTC m=+238.977701198" watchObservedRunningTime="2026-03-14 09:00:53.991173646 +0000 UTC m=+238.979414021" Mar 14 09:00:53 crc kubenswrapper[4687]: I0314 09:00:53.992212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:53 crc kubenswrapper[4687]: E0314 09:00:53.992599 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.492585421 +0000 UTC m=+239.480825796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.083012 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hw27r" podStartSLOduration=167.082994609 podStartE2EDuration="2m47.082994609s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:54.079602795 +0000 UTC m=+239.067843170" watchObservedRunningTime="2026-03-14 09:00:54.082994609 +0000 UTC m=+239.071234984" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.093927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.094246 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.594234797 +0000 UTC m=+239.582475172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.113870 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.113925 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.163572 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.194792 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.194890 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.694875199 +0000 UTC m=+239.683115574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.195124 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.195398 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.695391882 +0000 UTC m=+239.683632257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.269664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.297911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.298753 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.79873744 +0000 UTC m=+239.786977815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.374261 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-72dmh" podStartSLOduration=8.37424661 podStartE2EDuration="8.37424661s" podCreationTimestamp="2026-03-14 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:54.371749659 +0000 UTC m=+239.359990034" watchObservedRunningTime="2026-03-14 09:00:54.37424661 +0000 UTC m=+239.362486985" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.400228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.400598 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:54.900586082 +0000 UTC m=+239.888826457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.500771 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.501174 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.001158743 +0000 UTC m=+239.989399118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.520495 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:54 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:54 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:54 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.520548 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.527198 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.528058 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.533040 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.551165 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.554056 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2xtpr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.554106 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" podUID="63f0431e-f11b-4a67-8e2d-e0c95b496767" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.602585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.602919 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 09:00:55.102904042 +0000 UTC m=+240.091144417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.701941 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703185 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.703597 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.203573175 +0000 UTC m=+240.191813600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9p4\" (UniqueName: \"kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.703981 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.704326 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.204312222 +0000 UTC m=+240.192552597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.705284 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.727961 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.801694 4687 generic.go:334] "Generic (PLEG): container finished" podID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerID="94dc3a5657c46609c7c87df713814094c2c0a591cb869b3465392812bc3972c2" exitCode=0 Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.801812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" event={"ID":"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5","Type":"ContainerDied","Data":"94dc3a5657c46609c7c87df713814094c2c0a591cb869b3465392812bc3972c2"} Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.811307 4687 
generic.go:334] "Generic (PLEG): container finished" podID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerID="5cd1c1b1695b8a7495381cac44ad3d9af033c41c7d932b6026cce1721d018f42" exitCode=0 Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.811420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" event={"ID":"bc5b3351-9222-4a86-a305-b11ac78717d5","Type":"ContainerDied","Data":"5cd1c1b1695b8a7495381cac44ad3d9af033c41c7d932b6026cce1721d018f42"} Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.813739 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.813949 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.813979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.813996 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-14 09:00:55.313975267 +0000 UTC m=+240.302215642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nqj\" (UniqueName: \"kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814197 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9p4\" (UniqueName: \"kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814262 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814588 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.814877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.824007 4687 generic.go:334] "Generic (PLEG): container finished" podID="eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" containerID="d0c8eb3db79fed9a7cd0ada8857577dd05021b0009e222c3d6273cf2522943c1" exitCode=0 Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.824502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" event={"ID":"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8","Type":"ContainerDied","Data":"d0c8eb3db79fed9a7cd0ada8857577dd05021b0009e222c3d6273cf2522943c1"} Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.825361 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tnnb6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 
09:00:54.825394 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.860262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9p4\" (UniqueName: \"kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4\") pod \"community-operators-j8w67\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.899368 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.900255 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.920008 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.921021 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.921165 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.921187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.921219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nqj\" (UniqueName: \"kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.922142 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.922954 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:54 crc kubenswrapper[4687]: E0314 09:00:54.923206 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.423192801 +0000 UTC m=+240.411433176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:54 crc kubenswrapper[4687]: I0314 09:00:54.951028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nqj\" (UniqueName: \"kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj\") pod \"certified-operators-sgxm6\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.029819 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.030183 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.53015496 +0000 UTC m=+240.518395335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.030718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.030976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.031071 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.031256 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvd5\" (UniqueName: 
\"kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.056453 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.556430511 +0000 UTC m=+240.544670876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.077441 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.108815 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.109838 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.117851 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.132476 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r47bq" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133037 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133252 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvd5\" (UniqueName: \"kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5\") pod \"community-operators-8f9h8\" (UID: 
\"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133389 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.133456 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.633438568 +0000 UTC m=+240.621678943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngrz\" (UniqueName: \"kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.133549 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.134164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.134200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.158063 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.166954 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvd5\" (UniqueName: \"kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5\") pod \"community-operators-8f9h8\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.208363 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xtpr" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.216149 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.236137 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.236182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.236244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.236281 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngrz\" (UniqueName: \"kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.236754 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 09:00:55.736735925 +0000 UTC m=+240.724976350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.236956 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.237571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.237835 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.268111 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.268407 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerName="controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.268430 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerName="controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.268585 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" containerName="controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.269073 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.287837 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.288638 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngrz\" (UniqueName: \"kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz\") pod \"certified-operators-chn6l\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.336572 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343014 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles\") pod \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stkcv\" (UniqueName: \"kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv\") pod \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config\") pod \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert\") pod \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343127 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca\") pod \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\" (UID: \"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert\") pod \"bc5b3351-9222-4a86-a305-b11ac78717d5\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343255 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config\") pod \"bc5b3351-9222-4a86-a305-b11ac78717d5\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca\") pod \"bc5b3351-9222-4a86-a305-b11ac78717d5\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk8lc\" (UniqueName: \"kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc\") pod \"bc5b3351-9222-4a86-a305-b11ac78717d5\" (UID: \"bc5b3351-9222-4a86-a305-b11ac78717d5\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles\") pod \"controller-manager-756cfdb766-69nkf\" (UID: 
\"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343463 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmktq\" (UniqueName: \"kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.343550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.343582 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.843564561 +0000 UTC m=+240.831804936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.344527 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" (UID: "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.344567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config" (OuterVolumeSpecName: "config") pod "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" (UID: "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.345163 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" (UID: "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.346066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc5b3351-9222-4a86-a305-b11ac78717d5" (UID: "bc5b3351-9222-4a86-a305-b11ac78717d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.346086 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config" (OuterVolumeSpecName: "config") pod "bc5b3351-9222-4a86-a305-b11ac78717d5" (UID: "bc5b3351-9222-4a86-a305-b11ac78717d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.361277 4687 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.361561 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc" (OuterVolumeSpecName: "kube-api-access-wk8lc") pod "bc5b3351-9222-4a86-a305-b11ac78717d5" (UID: "bc5b3351-9222-4a86-a305-b11ac78717d5"). InnerVolumeSpecName "kube-api-access-wk8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.363294 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv" (OuterVolumeSpecName: "kube-api-access-stkcv") pod "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" (UID: "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5"). 
InnerVolumeSpecName "kube-api-access-stkcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.370257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" (UID: "ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.372258 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5b3351-9222-4a86-a305-b11ac78717d5" (UID: "bc5b3351-9222-4a86-a305-b11ac78717d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.445243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.445533 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.445597 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.445715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmktq\" (UniqueName: \"kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.445771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.446704 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.446981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.452556 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.452831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459722 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459822 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk8lc\" (UniqueName: \"kubernetes.io/projected/bc5b3351-9222-4a86-a305-b11ac78717d5-kube-api-access-wk8lc\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459835 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459845 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stkcv\" (UniqueName: \"kubernetes.io/projected/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-kube-api-access-stkcv\") on node \"crc\" DevicePath 
\"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459854 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459862 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459870 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459878 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5b3351-9222-4a86-a305-b11ac78717d5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459889 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.459896 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc5b3351-9222-4a86-a305-b11ac78717d5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.460154 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:55.960138237 +0000 UTC m=+240.948378612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.481684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmktq\" (UniqueName: \"kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq\") pod \"controller-manager-756cfdb766-69nkf\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.504234 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.527106 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:55 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:55 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:55 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.527169 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.560545 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.560890 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:56.060877351 +0000 UTC m=+241.049117716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.613820 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.614196 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:55 crc kubenswrapper[4687]: W0314 09:00:55.639863 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b47421_912e_4faa_b3ed_33881459d76e.slice/crio-ccb56d6724577f904f0a379ba7554835c661867cf10932e833e326c2d71df98c WatchSource:0}: Error finding container ccb56d6724577f904f0a379ba7554835c661867cf10932e833e326c2d71df98c: Status 404 returned error can't find the container with id ccb56d6724577f904f0a379ba7554835c661867cf10932e833e326c2d71df98c Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.663106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.663666 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:56.163648675 +0000 UTC m=+241.151889050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.667452 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.764637 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.765075 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:56.265055286 +0000 UTC m=+241.253295661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.830834 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.831349 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerName="route-controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.831385 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerName="route-controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.831500 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" containerName="route-controller-manager" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.831898 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.835553 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.869071 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" event={"ID":"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e","Type":"ContainerStarted","Data":"34ece67ff576f405bbcc2b13ff93942fe895a9d86442161bad7c45e4ceb74ca9"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.869124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" event={"ID":"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e","Type":"ContainerStarted","Data":"1155ca7e06f3ca14ba3b47b04ef9e4c0397acd94a39c7e69447c0135f7254212"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.872191 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbfs\" (UniqueName: \"kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.872258 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.872285 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.872509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.872548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: E0314 09:00:55.873181 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:00:56.373163623 +0000 UTC m=+241.361403998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gjtxn" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.889761 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.890063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd" event={"ID":"bc5b3351-9222-4a86-a305-b11ac78717d5","Type":"ContainerDied","Data":"8f6803d432bf3ef534820729daf26008c055823389ba94a7084383cd472bbeaa"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.890121 4687 scope.go:117] "RemoveContainer" containerID="5cd1c1b1695b8a7495381cac44ad3d9af033c41c7d932b6026cce1721d018f42" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.898013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerStarted","Data":"ccb56d6724577f904f0a379ba7554835c661867cf10932e833e326c2d71df98c"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.902603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerStarted","Data":"c18c6a3cdfa88597fb2847756c6c9963ad0c24f9910b389653621c66bd362892"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.905074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.918843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" event={"ID":"ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5","Type":"ContainerDied","Data":"bb9b1db690576a0c93617ecc96d2094eb9c28ba1901a07f90cc9727d97c02e38"} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.919063 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-msfwg" Mar 14 09:00:55 crc kubenswrapper[4687]: W0314 09:00:55.933284 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b25e7f_bec3_4142_a347_886777f6a1c2.slice/crio-de790bce55ba676db93e85eb0f757ab724b40ab30b6faf19b579f156103f20e1 WatchSource:0}: Error finding container de790bce55ba676db93e85eb0f757ab724b40ab30b6faf19b579f156103f20e1: Status 404 returned error can't find the container with id de790bce55ba676db93e85eb0f757ab724b40ab30b6faf19b579f156103f20e1 Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.942941 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.950439 4687 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-14T09:00:55.361296069Z","Handler":null,"Name":""} Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.950866 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bnmqd"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.959925 4687 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with 
name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.959987 4687 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.975778 4687 scope.go:117] "RemoveContainer" containerID="94dc3a5657c46609c7c87df713814094c2c0a591cb869b3465392812bc3972c2" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.977103 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.977349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.977386 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmbfs\" (UniqueName: \"kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.977415 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.977442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.981841 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.982580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.983982 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.993187 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-msfwg"] Mar 14 09:00:55 crc kubenswrapper[4687]: I0314 09:00:55.995802 4687 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.000825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.008968 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmbfs\" (UniqueName: \"kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs\") pod \"route-controller-manager-78dccf48f4-n8s22\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.011636 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.081586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.088581 4687 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.088614 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.162016 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gjtxn\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.167684 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.201801 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.218984 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:56 crc kubenswrapper[4687]: W0314 09:00:56.222188 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ded01b8_d9df_49e5_91ad_da6b5f1fb358.slice/crio-ca7fd2b89ec1c90814fcfe31471b2a9ad7870abaa755c04464cd52d818ef043f WatchSource:0}: Error finding container ca7fd2b89ec1c90814fcfe31471b2a9ad7870abaa755c04464cd52d818ef043f: Status 404 returned error can't find the container with id ca7fd2b89ec1c90814fcfe31471b2a9ad7870abaa755c04464cd52d818ef043f Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.375794 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.385140 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.387216 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59bnp\" (UniqueName: \"kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp\") pod \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.387319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume\") pod \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.387362 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume\") 
pod \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\" (UID: \"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8\") " Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.388456 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume" (OuterVolumeSpecName: "config-volume") pod "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" (UID: "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.391644 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" (UID: "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.412283 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp" (OuterVolumeSpecName: "kube-api-access-59bnp") pod "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" (UID: "eca59e50-a6ce-4f7f-a81d-1e60677a7ac8"). InnerVolumeSpecName "kube-api-access-59bnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.468264 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.488541 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59bnp\" (UniqueName: \"kubernetes.io/projected/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-kube-api-access-59bnp\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.488569 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.488598 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.521733 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:56 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:56 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:56 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.521790 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:56 crc kubenswrapper[4687]: W0314 09:00:56.521823 4687 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod945af372_2e90_40f4_80e9_be605ed6938a.slice/crio-f5d63376a2c53a88aecab8a5524339b2e7c62e06079a0c44d2f5c70227adca47 WatchSource:0}: Error finding container f5d63376a2c53a88aecab8a5524339b2e7c62e06079a0c44d2f5c70227adca47: Status 404 returned error can't find the container with id f5d63376a2c53a88aecab8a5524339b2e7c62e06079a0c44d2f5c70227adca47 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.708160 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:00:56 crc kubenswrapper[4687]: E0314 09:00:56.708760 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" containerName="collect-profiles" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.708776 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" containerName="collect-profiles" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.708921 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" containerName="collect-profiles" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.709782 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.717239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.722250 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.852247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:00:56 crc kubenswrapper[4687]: W0314 09:00:56.870888 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eed2cb6_ab8f_4447_9a76_9d238ba48d9b.slice/crio-f4a30de14f08b3751bbb465b107348b3d04183fb44a3ba4d66c20d8ab705ef61 WatchSource:0}: Error finding container f4a30de14f08b3751bbb465b107348b3d04183fb44a3ba4d66c20d8ab705ef61: Status 404 returned error can't find the container with id f4a30de14f08b3751bbb465b107348b3d04183fb44a3ba4d66c20d8ab705ef61 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.893997 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.894194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc 
kubenswrapper[4687]: I0314 09:00:56.894248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wm9\" (UniqueName: \"kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.933505 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerID="735dd163fdb008fbc19c5a7d867182a1331afe790b256f2be93f299ffa0f5432" exitCode=0 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.933745 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerDied","Data":"735dd163fdb008fbc19c5a7d867182a1331afe790b256f2be93f299ffa0f5432"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.933800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerStarted","Data":"de790bce55ba676db93e85eb0f757ab724b40ab30b6faf19b579f156103f20e1"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.946326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" event={"ID":"eca59e50-a6ce-4f7f-a81d-1e60677a7ac8","Type":"ContainerDied","Data":"6b943acf13e8425ea0db663a2cebfbebf3faeee9a2b0e3196bd9c65781819749"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.946394 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b943acf13e8425ea0db663a2cebfbebf3faeee9a2b0e3196bd9c65781819749" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.946686 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.954218 4687 generic.go:334] "Generic (PLEG): container finished" podID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerID="3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a" exitCode=0 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.954305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerDied","Data":"3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.954353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerStarted","Data":"58350fed0d7a8dc8b9a35a9a8f218cf411ebc27db43bcd393a52b97a7c63debe"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.964174 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8b47421-912e-4faa-b3ed-33881459d76e" containerID="b744b0ad5ed7fb85f660ac210df854dae5c13ef9e5507fa90d2b47abc4bc50f7" exitCode=0 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.964384 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerDied","Data":"b744b0ad5ed7fb85f660ac210df854dae5c13ef9e5507fa90d2b47abc4bc50f7"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.968824 4687 generic.go:334] "Generic (PLEG): container finished" podID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerID="33967deb856c952b5b9cc08c9b33c1b75dbaae8f415ca1565aac07df9d60a6ba" exitCode=0 Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.968877 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerDied","Data":"33967deb856c952b5b9cc08c9b33c1b75dbaae8f415ca1565aac07df9d60a6ba"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.977593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" event={"ID":"4ded01b8-d9df-49e5-91ad-da6b5f1fb358","Type":"ContainerStarted","Data":"e17be05b76c0b9665e659f8106c03e25e8f5a6b25d11026fa8a5cfe427e0670d"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.977871 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.977882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" event={"ID":"4ded01b8-d9df-49e5-91ad-da6b5f1fb358","Type":"ContainerStarted","Data":"ca7fd2b89ec1c90814fcfe31471b2a9ad7870abaa755c04464cd52d818ef043f"} Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.992024 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.995545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.995600 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7wm9\" (UniqueName: \"kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9\") pod \"redhat-marketplace-x58t9\" (UID: 
\"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.995661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.996012 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.996172 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:56 crc kubenswrapper[4687]: I0314 09:00:56.999062 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" event={"ID":"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b","Type":"ContainerStarted","Data":"f4a30de14f08b3751bbb465b107348b3d04183fb44a3ba4d66c20d8ab705ef61"} Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.008764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" event={"ID":"945af372-2e90-40f4-80e9-be605ed6938a","Type":"ContainerStarted","Data":"b36b66bfe1611d1956b45e0a74f52e3a363f86f74285284dffa8cab01868f773"} Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 
09:00:57.008837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" event={"ID":"945af372-2e90-40f4-80e9-be605ed6938a","Type":"ContainerStarted","Data":"f5d63376a2c53a88aecab8a5524339b2e7c62e06079a0c44d2f5c70227adca47"} Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.010050 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.016119 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" event={"ID":"a290d355-2ca7-42b6-a8fd-06eeb6cf1b7e","Type":"ContainerStarted","Data":"df1c294c52b0d6639b9d12250482bb6c0a6d6851ffd71cd2e62a0d784b555b60"} Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.021421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7wm9\" (UniqueName: \"kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9\") pod \"redhat-marketplace-x58t9\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.037066 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.039261 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" podStartSLOduration=3.039238975 podStartE2EDuration="3.039238975s" podCreationTimestamp="2026-03-14 09:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:57.0337776 +0000 UTC m=+242.022017975" watchObservedRunningTime="2026-03-14 09:00:57.039238975 +0000 UTC m=+242.027479350" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.116575 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.118721 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.119161 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pdncg" podStartSLOduration=11.119142353 podStartE2EDuration="11.119142353s" podCreationTimestamp="2026-03-14 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:57.106791337 +0000 UTC m=+242.095031712" watchObservedRunningTime="2026-03-14 09:00:57.119142353 +0000 UTC m=+242.107382728" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.124980 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.168151 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" 
podStartSLOduration=3.168109236 podStartE2EDuration="3.168109236s" podCreationTimestamp="2026-03-14 09:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:57.130801401 +0000 UTC m=+242.119041776" watchObservedRunningTime="2026-03-14 09:00:57.168109236 +0000 UTC m=+242.156349611" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.453110 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.499942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.499992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pbr\" (UniqueName: \"kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.500079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.522900 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:57 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:57 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:57 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.522959 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.585067 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.601429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.601491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pbr\" (UniqueName: \"kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.601530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " 
pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.602391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.602411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.627299 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pbr\" (UniqueName: \"kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr\") pod \"redhat-marketplace-p62jj\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.700489 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.701692 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.703703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.717590 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.747669 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.748912 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5b3351-9222-4a86-a305-b11ac78717d5" path="/var/lib/kubelet/pods/bc5b3351-9222-4a86-a305-b11ac78717d5/volumes" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.749790 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5" path="/var/lib/kubelet/pods/ff8c2f09-7a3f-464c-b4d9-21aa3e6e1fe5/volumes" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.784866 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.803004 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.803147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bw9\" (UniqueName: \"kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.803201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.904410 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.904740 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bw9\" (UniqueName: \"kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9\") pod 
\"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.904798 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.905021 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.905159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:57 crc kubenswrapper[4687]: I0314 09:00:57.921008 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bw9\" (UniqueName: \"kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9\") pod \"redhat-operators-qp9c4\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.010848 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:00:58 crc kubenswrapper[4687]: W0314 09:00:58.015042 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3d7d663_f8a3_477b_9487_3e284e3cdf6b.slice/crio-54e839a3df74d162c4090492bc16e97b6a8c8c63c0545f9e4ae822de4a50b9c4 WatchSource:0}: Error finding container 54e839a3df74d162c4090492bc16e97b6a8c8c63c0545f9e4ae822de4a50b9c4: Status 404 returned error can't find the container with id 54e839a3df74d162c4090492bc16e97b6a8c8c63c0545f9e4ae822de4a50b9c4 Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.029054 4687 generic.go:334] "Generic (PLEG): container finished" podID="764f93b4-9c3b-400d-b508-4534689e51a7" containerID="464471f7c17e607e6a5db8c7ddbe6ce3def70e5fc5cdbdf8839a3b14b1016f3c" exitCode=0 Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.029116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerDied","Data":"464471f7c17e607e6a5db8c7ddbe6ce3def70e5fc5cdbdf8839a3b14b1016f3c"} Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.029143 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerStarted","Data":"12593782f52474abff4d3b25fc926ad4dc4c7f84814e0fe6fd7a3ce268b1ec21"} Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.034776 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerStarted","Data":"54e839a3df74d162c4090492bc16e97b6a8c8c63c0545f9e4ae822de4a50b9c4"} Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.036798 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.037110 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" event={"ID":"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b","Type":"ContainerStarted","Data":"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415"} Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.066452 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" podStartSLOduration=171.066432337 podStartE2EDuration="2m51.066432337s" podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:58.059543456 +0000 UTC m=+243.047783851" watchObservedRunningTime="2026-03-14 09:00:58.066432337 +0000 UTC m=+243.054672712" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.099162 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"] Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.100218 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.117891 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"] Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.216034 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6vj\" (UniqueName: \"kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.216067 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.216114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.317058 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6vj\" (UniqueName: \"kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.317108 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.317148 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.317770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.317906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.339275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6vj\" (UniqueName: \"kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj\") pod \"redhat-operators-5tn7f\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") " pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.420398 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.515398 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.516319 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.518162 4687 patch_prober.go:28] interesting pod/console-f9d7485db-4bm6l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.518222 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bm6l" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.520086 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:58 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:58 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:58 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.520239 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:58 crc 
kubenswrapper[4687]: I0314 09:00:58.539033 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.539069 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.544807 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.549608 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:00:58 crc kubenswrapper[4687]: W0314 09:00:58.723187 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f679b4_c725_4c83_9248_e1a292d851bf.slice/crio-548af851d665e364485c5eb898f77d2c20c1894dc8cbebd6132366353c2a596b WatchSource:0}: Error finding container 548af851d665e364485c5eb898f77d2c20c1894dc8cbebd6132366353c2a596b: Status 404 returned error can't find the container with id 548af851d665e364485c5eb898f77d2c20c1894dc8cbebd6132366353c2a596b Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.743543 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"] Mar 14 09:00:58 crc kubenswrapper[4687]: W0314 09:00:58.756859 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad6f885_26fe_42ec_9cd8_3578b5ce5574.slice/crio-b662e05dccda23fc72526ac8967e91f8b8fd2d6b943efd299d3590dae10c7946 WatchSource:0}: Error finding container b662e05dccda23fc72526ac8967e91f8b8fd2d6b943efd299d3590dae10c7946: Status 404 returned error can't find the container with id b662e05dccda23fc72526ac8967e91f8b8fd2d6b943efd299d3590dae10c7946 Mar 14 09:00:58 crc 
kubenswrapper[4687]: I0314 09:00:58.819901 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-7578x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.820256 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7578x" podUID="6b74e63b-7771-4f32-9fca-0e112597d97e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.819908 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-7578x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 09:00:58 crc kubenswrapper[4687]: I0314 09:00:58.820418 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7578x" podUID="6b74e63b-7771-4f32-9fca-0e112597d97e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.046273 4687 generic.go:334] "Generic (PLEG): container finished" podID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerID="afa4426fbd1e9694c2dae251b3aa68a30546ad7f689fd2d2a0c575fed16dd4aa" exitCode=0 Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.046373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerDied","Data":"afa4426fbd1e9694c2dae251b3aa68a30546ad7f689fd2d2a0c575fed16dd4aa"} Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 
09:00:59.049605 4687 generic.go:334] "Generic (PLEG): container finished" podID="89f679b4-c725-4c83-9248-e1a292d851bf" containerID="0c0150c1f9716f5336eeeee6a581decd40abc057b5e9b1bf425f950d6e747418" exitCode=0 Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.049642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerDied","Data":"0c0150c1f9716f5336eeeee6a581decd40abc057b5e9b1bf425f950d6e747418"} Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.049659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerStarted","Data":"548af851d665e364485c5eb898f77d2c20c1894dc8cbebd6132366353c2a596b"} Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.052799 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerStarted","Data":"b662e05dccda23fc72526ac8967e91f8b8fd2d6b943efd299d3590dae10c7946"} Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.053847 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.057437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bbmr4" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.158601 4687 ???:1] "http: TLS handshake error from 192.168.126.11:58100: no serving certificate available for the kubelet" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.186901 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.198535 4687 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.198625 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.201440 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.201685 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.270355 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.338840 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.338884 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.340299 4687 ???:1] "http: TLS handshake error from 192.168.126.11:58106: no serving certificate available for the kubelet" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.440041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.440092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.440173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.482952 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.519797 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.524887 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.528025 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:00:59 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:00:59 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:00:59 crc kubenswrapper[4687]: healthz check failed Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.528135 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:00:59 crc kubenswrapper[4687]: I0314 09:00:59.879147 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:00:59 crc kubenswrapper[4687]: W0314 09:00:59.956383 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf2564578_f793_4dd8_879f_5e60251d43cb.slice/crio-c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585 WatchSource:0}: Error finding container c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585: Status 404 returned error can't find the container with id c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585 Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.064831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2564578-f793-4dd8-879f-5e60251d43cb","Type":"ContainerStarted","Data":"c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585"} Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.068965 4687 generic.go:334] "Generic 
(PLEG): container finished" podID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerID="9dedded0c3cb308404927e5913300a80355274ec8e7b5e6e2385791cfae37c4a" exitCode=0 Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.069900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerDied","Data":"9dedded0c3cb308404927e5913300a80355274ec8e7b5e6e2385791cfae37c4a"} Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.077205 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.078139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.079709 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.080307 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.091040 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.181142 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.181244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.286918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.287004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.288885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.331860 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.434655 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.523268 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:00 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:00 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:00 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:00 crc kubenswrapper[4687]: I0314 09:01:00.523813 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.076351 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2564578-f793-4dd8-879f-5e60251d43cb","Type":"ContainerStarted","Data":"9d5004ce8ee6ea4887eda10b1719ad5df0defaa17dd942cf7aba750de5e70902"} Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.100052 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.100033328 podStartE2EDuration="2.100033328s" podCreationTimestamp="2026-03-14 09:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:01.093829315 +0000 UTC m=+246.082069690" watchObservedRunningTime="2026-03-14 09:01:01.100033328 +0000 UTC m=+246.088273703" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.519757 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:01 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:01 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:01 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.519820 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.622083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.622160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.624487 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.624530 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.633866 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.641908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.723885 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.723944 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.726622 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.737074 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.766681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.770172 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.868853 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.877818 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:01 crc kubenswrapper[4687]: I0314 09:01:01.905227 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:02 crc kubenswrapper[4687]: I0314 09:01:02.087230 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2564578-f793-4dd8-879f-5e60251d43cb" containerID="9d5004ce8ee6ea4887eda10b1719ad5df0defaa17dd942cf7aba750de5e70902" exitCode=0 Mar 14 09:01:02 crc kubenswrapper[4687]: I0314 09:01:02.087278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2564578-f793-4dd8-879f-5e60251d43cb","Type":"ContainerDied","Data":"9d5004ce8ee6ea4887eda10b1719ad5df0defaa17dd942cf7aba750de5e70902"} Mar 14 09:01:02 crc kubenswrapper[4687]: I0314 09:01:02.529477 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:02 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:02 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:02 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:02 crc kubenswrapper[4687]: I0314 09:01:02.529558 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.162170 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.163865 4687 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.181498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4aae76c5-5354-43fd-8771-0114216bbf40-metrics-certs\") pod \"network-metrics-daemon-2xptn\" (UID: \"4aae76c5-5354-43fd-8771-0114216bbf40\") " pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.365133 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.372053 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xptn" Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.525312 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:03 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:03 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:03 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:03 crc kubenswrapper[4687]: I0314 09:01:03.525398 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:04 crc kubenswrapper[4687]: I0314 09:01:04.355767 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-72dmh" Mar 14 09:01:04 crc kubenswrapper[4687]: I0314 09:01:04.520933 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:04 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:04 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:04 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:04 crc kubenswrapper[4687]: I0314 09:01:04.521012 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:05 crc kubenswrapper[4687]: I0314 09:01:05.519058 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:05 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:05 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:05 crc kubenswrapper[4687]: healthz check failed Mar 14 09:01:05 crc kubenswrapper[4687]: I0314 09:01:05.519186 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:06 crc kubenswrapper[4687]: I0314 09:01:06.521991 4687 patch_prober.go:28] interesting pod/router-default-5444994796-t7bl7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:06 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 14 09:01:06 crc kubenswrapper[4687]: [+]process-running ok Mar 14 09:01:06 crc 
kubenswrapper[4687]: healthz check failed Mar 14 09:01:06 crc kubenswrapper[4687]: I0314 09:01:06.522111 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t7bl7" podUID="6d13a0e5-c5cc-4f39-9d07-ba4986f118e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:07 crc kubenswrapper[4687]: I0314 09:01:07.519183 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:01:07 crc kubenswrapper[4687]: I0314 09:01:07.521592 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t7bl7" Mar 14 09:01:08 crc kubenswrapper[4687]: I0314 09:01:08.514720 4687 patch_prober.go:28] interesting pod/console-f9d7485db-4bm6l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 14 09:01:08 crc kubenswrapper[4687]: I0314 09:01:08.514774 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bm6l" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 14 09:01:08 crc kubenswrapper[4687]: I0314 09:01:08.825018 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7578x" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.069996 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.137015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2564578-f793-4dd8-879f-5e60251d43cb","Type":"ContainerDied","Data":"c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585"} Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.137078 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c658751bba1d9540d1b03b3d8afbce614567fe533671fd26df44cf4964c3a585" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.137156 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.153622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir\") pod \"f2564578-f793-4dd8-879f-5e60251d43cb\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.153698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access\") pod \"f2564578-f793-4dd8-879f-5e60251d43cb\" (UID: \"f2564578-f793-4dd8-879f-5e60251d43cb\") " Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.153785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2564578-f793-4dd8-879f-5e60251d43cb" (UID: "f2564578-f793-4dd8-879f-5e60251d43cb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.153937 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2564578-f793-4dd8-879f-5e60251d43cb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.158901 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2564578-f793-4dd8-879f-5e60251d43cb" (UID: "f2564578-f793-4dd8-879f-5e60251d43cb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.256429 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2564578-f793-4dd8-879f-5e60251d43cb-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:09 crc kubenswrapper[4687]: I0314 09:01:09.422094 4687 ???:1] "http: TLS handshake error from 192.168.126.11:45422: no serving certificate available for the kubelet" Mar 14 09:01:13 crc kubenswrapper[4687]: I0314 09:01:13.286601 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:01:13 crc kubenswrapper[4687]: I0314 09:01:13.287277 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerName="controller-manager" containerID="cri-o://e17be05b76c0b9665e659f8106c03e25e8f5a6b25d11026fa8a5cfe427e0670d" gracePeriod=30 Mar 14 09:01:13 crc kubenswrapper[4687]: I0314 09:01:13.306550 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 
14 09:01:13 crc kubenswrapper[4687]: I0314 09:01:13.306786 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" podUID="945af372-2e90-40f4-80e9-be605ed6938a" containerName="route-controller-manager" containerID="cri-o://b36b66bfe1611d1956b45e0a74f52e3a363f86f74285284dffa8cab01868f773" gracePeriod=30 Mar 14 09:01:15 crc kubenswrapper[4687]: I0314 09:01:15.189135 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerID="e17be05b76c0b9665e659f8106c03e25e8f5a6b25d11026fa8a5cfe427e0670d" exitCode=0 Mar 14 09:01:15 crc kubenswrapper[4687]: I0314 09:01:15.189197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" event={"ID":"4ded01b8-d9df-49e5-91ad-da6b5f1fb358","Type":"ContainerDied","Data":"e17be05b76c0b9665e659f8106c03e25e8f5a6b25d11026fa8a5cfe427e0670d"} Mar 14 09:01:15 crc kubenswrapper[4687]: I0314 09:01:15.191771 4687 generic.go:334] "Generic (PLEG): container finished" podID="945af372-2e90-40f4-80e9-be605ed6938a" containerID="b36b66bfe1611d1956b45e0a74f52e3a363f86f74285284dffa8cab01868f773" exitCode=0 Mar 14 09:01:15 crc kubenswrapper[4687]: I0314 09:01:15.191825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" event={"ID":"945af372-2e90-40f4-80e9-be605ed6938a","Type":"ContainerDied","Data":"b36b66bfe1611d1956b45e0a74f52e3a363f86f74285284dffa8cab01868f773"} Mar 14 09:01:15 crc kubenswrapper[4687]: I0314 09:01:15.615202 4687 patch_prober.go:28] interesting pod/controller-manager-756cfdb766-69nkf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Mar 14 09:01:15 crc 
kubenswrapper[4687]: I0314 09:01:15.615266 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Mar 14 09:01:16 crc kubenswrapper[4687]: I0314 09:01:16.169360 4687 patch_prober.go:28] interesting pod/route-controller-manager-78dccf48f4-n8s22 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 14 09:01:16 crc kubenswrapper[4687]: I0314 09:01:16.169675 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" podUID="945af372-2e90-40f4-80e9-be605ed6938a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 14 09:01:16 crc kubenswrapper[4687]: I0314 09:01:16.392361 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:01:18 crc kubenswrapper[4687]: I0314 09:01:18.518932 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:01:18 crc kubenswrapper[4687]: I0314 09:01:18.522109 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.735317 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.739181 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764377 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:22 crc kubenswrapper[4687]: E0314 09:01:22.764595 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerName="controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764607 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerName="controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: E0314 09:01:22.764615 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2564578-f793-4dd8-879f-5e60251d43cb" containerName="pruner" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764622 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2564578-f793-4dd8-879f-5e60251d43cb" containerName="pruner" Mar 14 09:01:22 crc kubenswrapper[4687]: E0314 09:01:22.764632 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945af372-2e90-40f4-80e9-be605ed6938a" containerName="route-controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764638 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="945af372-2e90-40f4-80e9-be605ed6938a" containerName="route-controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764735 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="945af372-2e90-40f4-80e9-be605ed6938a" containerName="route-controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764750 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" containerName="controller-manager" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.764760 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2564578-f793-4dd8-879f-5e60251d43cb" containerName="pruner" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.765139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.795177 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.821865 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert\") pod \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.822224 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles\") pod \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.822473 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert\") pod \"945af372-2e90-40f4-80e9-be605ed6938a\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823017 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmbfs\" (UniqueName: 
\"kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs\") pod \"945af372-2e90-40f4-80e9-be605ed6938a\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823177 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca\") pod \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config\") pod \"945af372-2e90-40f4-80e9-be605ed6938a\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config\") pod \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmktq\" (UniqueName: \"kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq\") pod \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\" (UID: \"4ded01b8-d9df-49e5-91ad-da6b5f1fb358\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823715 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca\") pod \"945af372-2e90-40f4-80e9-be605ed6938a\" (UID: \"945af372-2e90-40f4-80e9-be605ed6938a\") " Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.823985 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552mc\" (UniqueName: \"kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.824109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.824254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.824704 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.824091 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "4ded01b8-d9df-49e5-91ad-da6b5f1fb358" (UID: "4ded01b8-d9df-49e5-91ad-da6b5f1fb358"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.824703 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config" (OuterVolumeSpecName: "config") pod "4ded01b8-d9df-49e5-91ad-da6b5f1fb358" (UID: "4ded01b8-d9df-49e5-91ad-da6b5f1fb358"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.825837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ded01b8-d9df-49e5-91ad-da6b5f1fb358" (UID: "4ded01b8-d9df-49e5-91ad-da6b5f1fb358"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.826190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca" (OuterVolumeSpecName: "client-ca") pod "945af372-2e90-40f4-80e9-be605ed6938a" (UID: "945af372-2e90-40f4-80e9-be605ed6938a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.826703 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config" (OuterVolumeSpecName: "config") pod "945af372-2e90-40f4-80e9-be605ed6938a" (UID: "945af372-2e90-40f4-80e9-be605ed6938a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.828540 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ded01b8-d9df-49e5-91ad-da6b5f1fb358" (UID: "4ded01b8-d9df-49e5-91ad-da6b5f1fb358"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.828556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs" (OuterVolumeSpecName: "kube-api-access-lmbfs") pod "945af372-2e90-40f4-80e9-be605ed6938a" (UID: "945af372-2e90-40f4-80e9-be605ed6938a"). InnerVolumeSpecName "kube-api-access-lmbfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.829014 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq" (OuterVolumeSpecName: "kube-api-access-kmktq") pod "4ded01b8-d9df-49e5-91ad-da6b5f1fb358" (UID: "4ded01b8-d9df-49e5-91ad-da6b5f1fb358"). InnerVolumeSpecName "kube-api-access-kmktq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.830634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "945af372-2e90-40f4-80e9-be605ed6938a" (UID: "945af372-2e90-40f4-80e9-be605ed6938a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.925872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-552mc\" (UniqueName: \"kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.925937 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.925963 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926002 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926037 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926049 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmktq\" (UniqueName: \"kubernetes.io/projected/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-kube-api-access-kmktq\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926059 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926070 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926078 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926086 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945af372-2e90-40f4-80e9-be605ed6938a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926094 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmbfs\" (UniqueName: \"kubernetes.io/projected/945af372-2e90-40f4-80e9-be605ed6938a-kube-api-access-lmbfs\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.926102 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945af372-2e90-40f4-80e9-be605ed6938a-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc 
kubenswrapper[4687]: I0314 09:01:22.926110 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ded01b8-d9df-49e5-91ad-da6b5f1fb358-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.927356 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.927571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.929098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:22 crc kubenswrapper[4687]: I0314 09:01:22.941468 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-552mc\" (UniqueName: \"kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc\") pod \"route-controller-manager-6c798655b6-vqktb\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 
09:01:23.096473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.229715 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.229714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22" event={"ID":"945af372-2e90-40f4-80e9-be605ed6938a","Type":"ContainerDied","Data":"f5d63376a2c53a88aecab8a5524339b2e7c62e06079a0c44d2f5c70227adca47"} Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.229773 4687 scope.go:117] "RemoveContainer" containerID="b36b66bfe1611d1956b45e0a74f52e3a363f86f74285284dffa8cab01868f773" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.231231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" event={"ID":"4ded01b8-d9df-49e5-91ad-da6b5f1fb358","Type":"ContainerDied","Data":"ca7fd2b89ec1c90814fcfe31471b2a9ad7870abaa755c04464cd52d818ef043f"} Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.231280 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756cfdb766-69nkf" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.263041 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.268267 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78dccf48f4-n8s22"] Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.281159 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.283177 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-756cfdb766-69nkf"] Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.743459 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ded01b8-d9df-49e5-91ad-da6b5f1fb358" path="/var/lib/kubelet/pods/4ded01b8-d9df-49e5-91ad-da6b5f1fb358/volumes" Mar 14 09:01:23 crc kubenswrapper[4687]: I0314 09:01:23.744207 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945af372-2e90-40f4-80e9-be605ed6938a" path="/var/lib/kubelet/pods/945af372-2e90-40f4-80e9-be605ed6938a/volumes" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.111869 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.111937 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.835421 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.836556 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.838788 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.839546 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.843234 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.844133 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.844317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.846659 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.847063 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.847353 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.854103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.854166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.854219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zkvh\" (UniqueName: \"kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.854238 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.854287 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.955529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.955650 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.955683 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.955726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zkvh\" (UniqueName: \"kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.955754 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.956630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.956892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.956981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc kubenswrapper[4687]: I0314 09:01:24.962284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:24 crc 
kubenswrapper[4687]: I0314 09:01:24.984242 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zkvh\" (UniqueName: \"kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh\") pod \"controller-manager-f44b94bfc-dc9rr\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:25 crc kubenswrapper[4687]: I0314 09:01:25.165052 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:27 crc kubenswrapper[4687]: E0314 09:01:27.459611 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 14 09:01:27 crc kubenswrapper[4687]: E0314 09:01:27.460259 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 09:01:27 crc kubenswrapper[4687]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 14 09:01:27 crc kubenswrapper[4687]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhqnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29557980-mm8hd_openshift-infra(17f93f38-eae8-494e-b879-4240e2712982): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 14 09:01:27 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 14 09:01:27 crc kubenswrapper[4687]: E0314 09:01:27.462198 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" podUID="17f93f38-eae8-494e-b879-4240e2712982" Mar 14 09:01:28 crc kubenswrapper[4687]: E0314 09:01:28.255631 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" podUID="17f93f38-eae8-494e-b879-4240e2712982" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.306028 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vmpw5" Mar 14 
09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.378657 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.380428 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.383311 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.383543 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.383649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.510887 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.510958 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.611588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.611667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.611716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.631099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:29 crc kubenswrapper[4687]: I0314 09:01:29.715914 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:33 crc kubenswrapper[4687]: I0314 09:01:33.250071 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:33 crc kubenswrapper[4687]: I0314 09:01:33.349409 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:34 crc kubenswrapper[4687]: E0314 09:01:34.036148 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 09:01:34 crc kubenswrapper[4687]: E0314 09:01:34.036353 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j8w67_openshift-marketplace(1cbd8ddb-1c88-4838-bce4-982b8c78ab4f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:34 crc kubenswrapper[4687]: E0314 09:01:34.037536 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j8w67" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" Mar 14 09:01:34 crc 
kubenswrapper[4687]: I0314 09:01:34.176994 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.178185 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.184798 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.368824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.369042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.369101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.470808 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " 
pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.470915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.470937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.470970 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.471001 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.490941 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access\") pod \"installer-9-crc\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:34 crc kubenswrapper[4687]: I0314 09:01:34.508783 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:35 crc kubenswrapper[4687]: E0314 09:01:35.886488 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 09:01:35 crc kubenswrapper[4687]: E0314 09:01:35.886945 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7wm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]C
ontainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x58t9_openshift-marketplace(764f93b4-9c3b-400d-b508-4534689e51a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:35 crc kubenswrapper[4687]: E0314 09:01:35.888322 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x58t9" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" Mar 14 09:01:36 crc kubenswrapper[4687]: I0314 09:01:36.159687 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xptn"] Mar 14 09:01:37 crc kubenswrapper[4687]: I0314 09:01:37.810927 4687 scope.go:117] "RemoveContainer" containerID="e17be05b76c0b9665e659f8106c03e25e8f5a6b25d11026fa8a5cfe427e0670d" Mar 14 09:01:37 crc kubenswrapper[4687]: E0314 09:01:37.811435 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j8w67" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" Mar 14 09:01:37 crc kubenswrapper[4687]: E0314 09:01:37.811686 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x58t9" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" Mar 14 09:01:37 crc kubenswrapper[4687]: W0314 09:01:37.817927 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-71cdb883c44d473d48422fbfc56c986926f2bf83b9631471b04504d9242fbf9f WatchSource:0}: Error finding container 71cdb883c44d473d48422fbfc56c986926f2bf83b9631471b04504d9242fbf9f: Status 404 returned error can't find the container with id 71cdb883c44d473d48422fbfc56c986926f2bf83b9631471b04504d9242fbf9f Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.079331 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.086307 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:01:38 crc kubenswrapper[4687]: E0314 09:01:38.093854 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 09:01:38 crc kubenswrapper[4687]: E0314 09:01:38.094283 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tngrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-chn6l_openshift-marketplace(b6b25e7f-bec3-4142-a347-886777f6a1c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:38 crc kubenswrapper[4687]: E0314 09:01:38.095740 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-chn6l" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" Mar 14 09:01:38 crc 
kubenswrapper[4687]: W0314 09:01:38.095795 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8951d614_e1c5_4761_8043_fff69c9005bf.slice/crio-910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58 WatchSource:0}: Error finding container 910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58: Status 404 returned error can't find the container with id 910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58 Mar 14 09:01:38 crc kubenswrapper[4687]: W0314 09:01:38.097753 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a93ee5a_6b92_4e18_9e33_b4aa167d58e3.slice/crio-cdd0049cb7a51c80c51cb58897adf6abc529944aaee4df09a47a630fd920977a WatchSource:0}: Error finding container cdd0049cb7a51c80c51cb58897adf6abc529944aaee4df09a47a630fd920977a: Status 404 returned error can't find the container with id cdd0049cb7a51c80c51cb58897adf6abc529944aaee4df09a47a630fd920977a Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.215377 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:38 crc kubenswrapper[4687]: W0314 09:01:38.225441 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbd5d0c28_21ff_49ea_b93c_f9e4c9eee6ab.slice/crio-2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b WatchSource:0}: Error finding container 2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b: Status 404 returned error can't find the container with id 2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.305464 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" 
event={"ID":"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3","Type":"ContainerStarted","Data":"cdd0049cb7a51c80c51cb58897adf6abc529944aaee4df09a47a630fd920977a"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.307127 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5850f5b15f2b0299599245c392f4e928cc02e73fdcdde7d484ad81ec5e31a6a8"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.307152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a38851a171f287b4491328b2393c73ee0177f5a10dc46cf1aa67b3bed3220bab"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.308405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d00ddbe8aff1bf7aafd173f8060395751e648cdf751c9a24b311a54058479c63"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.308428 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"500d4404f912302076f637d268279d2546eb00eae516c64680864c656a8028d8"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.312087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab","Type":"ContainerStarted","Data":"2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.313120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-2xptn" event={"ID":"4aae76c5-5354-43fd-8771-0114216bbf40","Type":"ContainerStarted","Data":"186eb07c4c9d1129ec5882202a56b8f839d785ba4fad79ef97074022a8f8be8e"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.314056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8951d614-e1c5-4761-8043-fff69c9005bf","Type":"ContainerStarted","Data":"910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.315280 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abcd1e508dfc6ba66034a6e9bef48fbddd1c96daf747a3d1467b766b4a08de83"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.315301 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"71cdb883c44d473d48422fbfc56c986926f2bf83b9631471b04504d9242fbf9f"} Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.315576 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:38 crc kubenswrapper[4687]: E0314 09:01:38.318685 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-chn6l" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" Mar 14 09:01:38 crc kubenswrapper[4687]: I0314 09:01:38.374713 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:38 crc 
kubenswrapper[4687]: I0314 09:01:38.377478 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:01:38 crc kubenswrapper[4687]: W0314 09:01:38.391702 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b81ff7a_14e2_4730_bdf7_3ddd6935c3ad.slice/crio-f617d84b8e59e1bd29530d1034050edb49b16d381a38948e00863fb065205373 WatchSource:0}: Error finding container f617d84b8e59e1bd29530d1034050edb49b16d381a38948e00863fb065205373: Status 404 returned error can't find the container with id f617d84b8e59e1bd29530d1034050edb49b16d381a38948e00863fb065205373 Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.324736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" event={"ID":"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3","Type":"ContainerStarted","Data":"5b2f21d17050283804275bbe46c834c5f43e94d3b8593be78e30c009b5f74b51"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.325193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.324986 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" containerID="cri-o://5b2f21d17050283804275bbe46c834c5f43e94d3b8593be78e30c009b5f74b51" gracePeriod=30 Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.326365 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab","Type":"ContainerStarted","Data":"b633745b35f49e7e6a77f3a0beddb786669ae6be8893d58d102006f9561b69ce"} Mar 
14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.328256 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8951d614-e1c5-4761-8043-fff69c9005bf","Type":"ContainerStarted","Data":"e937e4c0714df75d74fc8510367858b1d6a4d30f41e8eb35016156454d1883e2"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.329862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xptn" event={"ID":"4aae76c5-5354-43fd-8771-0114216bbf40","Type":"ContainerStarted","Data":"10bdecca565d456b5609b04655294c7bb6ff283421ee76f885013fcb0a99561e"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.331023 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"aeb28798-769f-4a7a-8da8-a5213458d060","Type":"ContainerStarted","Data":"719d56cd6a31d2cfc089557383c95f59df2910ef9684395f33ede4928b7a6945"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.331056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"aeb28798-769f-4a7a-8da8-a5213458d060","Type":"ContainerStarted","Data":"e7785c7b2aef00783b7f4b87350e0e8b7e33aacd9e7c86eccf681bf0b6bbc6c3"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.333318 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" event={"ID":"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad","Type":"ContainerStarted","Data":"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4"} Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.333394 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" event={"ID":"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad","Type":"ContainerStarted","Data":"f617d84b8e59e1bd29530d1034050edb49b16d381a38948e00863fb065205373"} Mar 14 09:01:39 crc kubenswrapper[4687]: 
I0314 09:01:39.333526 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" podUID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" containerName="controller-manager" containerID="cri-o://167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4" gracePeriod=30 Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.340767 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" podStartSLOduration=26.340754106 podStartE2EDuration="26.340754106s" podCreationTimestamp="2026-03-14 09:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:39.338995763 +0000 UTC m=+284.327236138" watchObservedRunningTime="2026-03-14 09:01:39.340754106 +0000 UTC m=+284.328994481" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.377879 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=39.377865463 podStartE2EDuration="39.377865463s" podCreationTimestamp="2026-03-14 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:39.374773036 +0000 UTC m=+284.363013411" watchObservedRunningTime="2026-03-14 09:01:39.377865463 +0000 UTC m=+284.366105838" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.421731 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.421712163 podStartE2EDuration="10.421712163s" podCreationTimestamp="2026-03-14 09:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
09:01:39.419170541 +0000 UTC m=+284.407410916" watchObservedRunningTime="2026-03-14 09:01:39.421712163 +0000 UTC m=+284.409952548" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.454989 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" podStartSLOduration=26.454967555 podStartE2EDuration="26.454967555s" podCreationTimestamp="2026-03-14 09:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:39.452903894 +0000 UTC m=+284.441144269" watchObservedRunningTime="2026-03-14 09:01:39.454967555 +0000 UTC m=+284.443207930" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.473893 4687 patch_prober.go:28] interesting pod/route-controller-manager-6c798655b6-vqktb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:33182->10.217.0.58:8443: read: connection reset by peer" start-of-body= Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.473964 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:33182->10.217.0.58:8443: read: connection reset by peer" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.475263 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.47524943 podStartE2EDuration="5.47524943s" podCreationTimestamp="2026-03-14 09:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 09:01:39.473413926 +0000 UTC m=+284.461654301" watchObservedRunningTime="2026-03-14 09:01:39.47524943 +0000 UTC m=+284.463489795" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.735803 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.750609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca\") pod \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.750670 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config\") pod \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.750733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zkvh\" (UniqueName: \"kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh\") pod \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.750766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles\") pod \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.750818 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert\") pod \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\" (UID: \"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad\") " Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.752361 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" (UID: "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.752407 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config" (OuterVolumeSpecName: "config") pod "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" (UID: "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.752371 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" (UID: "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.758451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" (UID: "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.759153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh" (OuterVolumeSpecName: "kube-api-access-6zkvh") pod "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" (UID: "9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad"). InnerVolumeSpecName "kube-api-access-6zkvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.760774 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"] Mar 14 09:01:39 crc kubenswrapper[4687]: E0314 09:01:39.761026 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" containerName="controller-manager" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.761042 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" containerName="controller-manager" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.761138 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" containerName="controller-manager" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.762028 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.767212 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"] Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " 
pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp7h\" (UniqueName: \"kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852859 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852875 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852884 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zkvh\" (UniqueName: \"kubernetes.io/projected/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-kube-api-access-6zkvh\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852893 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.852902 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:39 crc kubenswrapper[4687]: E0314 09:01:39.910054 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 09:01:39 crc kubenswrapper[4687]: E0314 09:01:39.910230 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5pbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p62jj_openshift-marketplace(d3d7d663-f8a3-477b-9487-3e284e3cdf6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:39 crc kubenswrapper[4687]: E0314 09:01:39.911376 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p62jj" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.954085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.954127 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.954149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.954168 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config\") pod 
\"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.954209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fp7h\" (UniqueName: \"kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.955402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.955559 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.956671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.959026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:39 crc kubenswrapper[4687]: I0314 09:01:39.971102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fp7h\" (UniqueName: \"kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h\") pod \"controller-manager-78cf64c84b-59bwb\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") " pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.083573 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.293227 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.293634 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8f9h8_openshift-marketplace(6f9f2a8a-59b8-4803-976f-d23c1d6de630): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.294773 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8f9h8" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" Mar 14 09:01:40 crc 
kubenswrapper[4687]: I0314 09:01:40.345162 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" containerID="167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4" exitCode=0 Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.345229 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" event={"ID":"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad","Type":"ContainerDied","Data":"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.345237 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.345255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f44b94bfc-dc9rr" event={"ID":"9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad","Type":"ContainerDied","Data":"f617d84b8e59e1bd29530d1034050edb49b16d381a38948e00863fb065205373"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.345273 4687 scope.go:117] "RemoveContainer" containerID="167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4" Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.347298 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c798655b6-vqktb_1a93ee5a-6b92-4e18-9e33-b4aa167d58e3/route-controller-manager/0.log" Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.347352 4687 generic.go:334] "Generic (PLEG): container finished" podID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerID="5b2f21d17050283804275bbe46c834c5f43e94d3b8593be78e30c009b5f74b51" exitCode=255 Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.347436 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" event={"ID":"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3","Type":"ContainerDied","Data":"5b2f21d17050283804275bbe46c834c5f43e94d3b8593be78e30c009b5f74b51"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.350255 4687 generic.go:334] "Generic (PLEG): container finished" podID="bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" containerID="b633745b35f49e7e6a77f3a0beddb786669ae6be8893d58d102006f9561b69ce" exitCode=0 Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.350368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab","Type":"ContainerDied","Data":"b633745b35f49e7e6a77f3a0beddb786669ae6be8893d58d102006f9561b69ce"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.354733 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xptn" event={"ID":"4aae76c5-5354-43fd-8771-0114216bbf40","Type":"ContainerStarted","Data":"2586fbb6638cea9a7ac8e4ced333e100ffc78b472a33162b7b48a9e443eca8f0"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.356153 4687 generic.go:334] "Generic (PLEG): container finished" podID="8951d614-e1c5-4761-8043-fff69c9005bf" containerID="e937e4c0714df75d74fc8510367858b1d6a4d30f41e8eb35016156454d1883e2" exitCode=0 Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.356934 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8951d614-e1c5-4761-8043-fff69c9005bf","Type":"ContainerDied","Data":"e937e4c0714df75d74fc8510367858b1d6a4d30f41e8eb35016156454d1883e2"} Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.395807 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2xptn" podStartSLOduration=213.39578767 podStartE2EDuration="3m33.39578767s" 
podCreationTimestamp="2026-03-14 08:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:40.394681282 +0000 UTC m=+285.382921647" watchObservedRunningTime="2026-03-14 09:01:40.39578767 +0000 UTC m=+285.384028045" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.440924 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.441088 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9nqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount
:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sgxm6_openshift-marketplace(a8b47421-912e-4faa-b3ed-33881459d76e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:40 crc kubenswrapper[4687]: E0314 09:01:40.442402 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sgxm6" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.464970 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:40 crc kubenswrapper[4687]: I0314 09:01:40.467412 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f44b94bfc-dc9rr"] Mar 14 09:01:41 crc kubenswrapper[4687]: I0314 09:01:41.744365 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad" path="/var/lib/kubelet/pods/9b81ff7a-14e2-4730-bdf7-3ddd6935c3ad/volumes" Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.421299 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p62jj" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" Mar 14 09:01:43 crc 
kubenswrapper[4687]: E0314 09:01:43.421917 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sgxm6" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.421974 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8f9h8" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.431628 4687 scope.go:117] "RemoveContainer" containerID="167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4" Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.431971 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4\": container with ID starting with 167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4 not found: ID does not exist" containerID="167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.431997 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4"} err="failed to get container status \"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4\": rpc error: code = NotFound desc = could not find container \"167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4\": container with ID starting with 167403abed0315d67995348729d3b6aaa7c03a35a25ceaad7438ccfa904589f4 not found: ID does not 
exist" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.496637 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c798655b6-vqktb_1a93ee5a-6b92-4e18-9e33-b4aa167d58e3/route-controller-manager/0.log" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.496906 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.503063 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.530398 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.551385 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"] Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.551741 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.551760 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.551772 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8951d614-e1c5-4761-8043-fff69c9005bf" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.551783 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8951d614-e1c5-4761-8043-fff69c9005bf" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: E0314 09:01:43.551814 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.551826 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.552048 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.552083 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8951d614-e1c5-4761-8043-fff69c9005bf" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.552098 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.552814 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.556421 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"] Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601009 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca\") pod \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config\") pod \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-552mc\" (UniqueName: \"kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc\") pod \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert\") pod \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\" (UID: \"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601226 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir\") pod 
\"8951d614-e1c5-4761-8043-fff69c9005bf\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601287 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access\") pod \"8951d614-e1c5-4761-8043-fff69c9005bf\" (UID: \"8951d614-e1c5-4761-8043-fff69c9005bf\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access\") pod \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir\") pod \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\" (UID: \"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab\") " Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mww\" (UniqueName: \"kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.601998 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" (UID: "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.602060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config" (OuterVolumeSpecName: "config") pod "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" (UID: "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.602105 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" (UID: "bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.602144 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8951d614-e1c5-4761-8043-fff69c9005bf" (UID: "8951d614-e1c5-4761-8043-fff69c9005bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.606842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc" (OuterVolumeSpecName: "kube-api-access-552mc") pod "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" (UID: "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3"). InnerVolumeSpecName "kube-api-access-552mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.606890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab" (UID: "bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.606924 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" (UID: "1a93ee5a-6b92-4e18-9e33-b4aa167d58e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.608023 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8951d614-e1c5-4761-8043-fff69c9005bf" (UID: "8951d614-e1c5-4761-8043-fff69c9005bf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.665380 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"] Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.702644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.704462 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mww\" (UniqueName: \"kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" 
Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.704491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.705469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.705655 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706505 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706523 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8951d614-e1c5-4761-8043-fff69c9005bf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706537 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8951d614-e1c5-4761-8043-fff69c9005bf-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706574 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706587 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706598 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706608 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.706620 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-552mc\" (UniqueName: \"kubernetes.io/projected/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3-kube-api-access-552mc\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.707203 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.710298 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.721380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mww\" (UniqueName: \"kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww\") pod \"route-controller-manager-5d84cbd8f8-d5hv2\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") " pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:43 crc kubenswrapper[4687]: I0314 09:01:43.887512 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.096999 4687 patch_prober.go:28] interesting pod/route-controller-manager-6c798655b6-vqktb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.097506 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.377544 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" event={"ID":"a253c1c3-19cb-4951-9790-d284412e93c2","Type":"ContainerStarted","Data":"84e6883bd9d171201c84c4b56161cced1f710f24b328ff8e8e41a91101c79669"} Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.380407 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c798655b6-vqktb_1a93ee5a-6b92-4e18-9e33-b4aa167d58e3/route-controller-manager/0.log" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.380570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" event={"ID":"1a93ee5a-6b92-4e18-9e33-b4aa167d58e3","Type":"ContainerDied","Data":"cdd0049cb7a51c80c51cb58897adf6abc529944aaee4df09a47a630fd920977a"} Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.380691 4687 scope.go:117] "RemoveContainer" containerID="5b2f21d17050283804275bbe46c834c5f43e94d3b8593be78e30c009b5f74b51" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.380912 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.385566 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.385579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bd5d0c28-21ff-49ea-b93c-f9e4c9eee6ab","Type":"ContainerDied","Data":"2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b"} Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.385885 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f34864e3a7b31113c3cf3486c6aed0e6d1958f27ef7723dff4f80fb1107149b" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.387810 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8951d614-e1c5-4761-8043-fff69c9005bf","Type":"ContainerDied","Data":"910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58"} Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.387954 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="910f1e817a10290c41fe2c785125af0662566108593c26b7329fdc1df2b22c58" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.387910 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.409034 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.412612 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c798655b6-vqktb"] Mar 14 09:01:44 crc kubenswrapper[4687]: I0314 09:01:44.866934 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"] Mar 14 09:01:44 crc kubenswrapper[4687]: W0314 09:01:44.878870 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8d9969d_4881_43b0_a8d6_790e31ae3ea6.slice/crio-c3ef17c5d62ded68b2fb31f71edbf5c791dacf2a727a7077661486f0bedaf40d WatchSource:0}: Error finding container c3ef17c5d62ded68b2fb31f71edbf5c791dacf2a727a7077661486f0bedaf40d: Status 404 returned error can't find the container with id c3ef17c5d62ded68b2fb31f71edbf5c791dacf2a727a7077661486f0bedaf40d Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.397614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerStarted","Data":"c3f003e916e3ed65918fc08c52012a6e17c85bc539fe7d05b2299b55110ec9e8"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.399407 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerStarted","Data":"e848835a1949557d7b7b5bdd067dc11e820acb51e9fa0f86a6564ce4539a7b61"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.400777 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29557980-mm8hd" event={"ID":"17f93f38-eae8-494e-b879-4240e2712982","Type":"ContainerStarted","Data":"db43786bd30daaa83a89bb2fc5fdd3f1f89abb876dd5360c985b47a79dcbfd49"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.402480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" event={"ID":"f8d9969d-4881-43b0-a8d6-790e31ae3ea6","Type":"ContainerStarted","Data":"730f26fa7a7b9e17c7280f1d2ef0c3f9c95a296889201169efe771de8bac2c05"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.402515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" event={"ID":"f8d9969d-4881-43b0-a8d6-790e31ae3ea6","Type":"ContainerStarted","Data":"c3ef17c5d62ded68b2fb31f71edbf5c791dacf2a727a7077661486f0bedaf40d"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.403310 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.405375 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" event={"ID":"a253c1c3-19cb-4951-9790-d284412e93c2","Type":"ContainerStarted","Data":"db66910abda508e8e4c281189397481072fd0627180b883c2ab1e3353dd4fbcd"} Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.405629 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.416690 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.434043 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" podStartSLOduration=12.43402585 podStartE2EDuration="12.43402585s" podCreationTimestamp="2026-03-14 09:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:45.431804105 +0000 UTC m=+290.420044470" watchObservedRunningTime="2026-03-14 09:01:45.43402585 +0000 UTC m=+290.422266225" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.455472 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" podStartSLOduration=52.498683681 podStartE2EDuration="1m45.455453693s" podCreationTimestamp="2026-03-14 09:00:00 +0000 UTC" firstStartedPulling="2026-03-14 09:00:51.58624979 +0000 UTC m=+236.574490175" lastFinishedPulling="2026-03-14 09:01:44.543019802 +0000 UTC m=+289.531260187" observedRunningTime="2026-03-14 09:01:45.454400177 +0000 UTC m=+290.442640552" watchObservedRunningTime="2026-03-14 09:01:45.455453693 +0000 UTC m=+290.443694058" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.498293 4687 csr.go:261] certificate signing request csr-9pjcf is approved, waiting to be issued Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.503559 4687 csr.go:257] certificate signing request csr-9pjcf is issued Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.511979 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" podStartSLOduration=12.511960403 podStartE2EDuration="12.511960403s" podCreationTimestamp="2026-03-14 09:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:45.511389799 +0000 UTC m=+290.499630174" watchObservedRunningTime="2026-03-14 09:01:45.511960403 +0000 UTC m=+290.500200778" 
Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.532019 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:01:45 crc kubenswrapper[4687]: I0314 09:01:45.745581 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a93ee5a-6b92-4e18-9e33-b4aa167d58e3" path="/var/lib/kubelet/pods/1a93ee5a-6b92-4e18-9e33-b4aa167d58e3/volumes" Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.413265 4687 generic.go:334] "Generic (PLEG): container finished" podID="89f679b4-c725-4c83-9248-e1a292d851bf" containerID="c3f003e916e3ed65918fc08c52012a6e17c85bc539fe7d05b2299b55110ec9e8" exitCode=0 Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.413316 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerDied","Data":"c3f003e916e3ed65918fc08c52012a6e17c85bc539fe7d05b2299b55110ec9e8"} Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.415618 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerID="e848835a1949557d7b7b5bdd067dc11e820acb51e9fa0f86a6564ce4539a7b61" exitCode=0 Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.415790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerDied","Data":"e848835a1949557d7b7b5bdd067dc11e820acb51e9fa0f86a6564ce4539a7b61"} Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.425122 4687 generic.go:334] "Generic (PLEG): container finished" podID="17f93f38-eae8-494e-b879-4240e2712982" containerID="db43786bd30daaa83a89bb2fc5fdd3f1f89abb876dd5360c985b47a79dcbfd49" exitCode=0 Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.425221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29557980-mm8hd" event={"ID":"17f93f38-eae8-494e-b879-4240e2712982","Type":"ContainerDied","Data":"db43786bd30daaa83a89bb2fc5fdd3f1f89abb876dd5360c985b47a79dcbfd49"} Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.504845 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 14:37:06.370744064 +0000 UTC Mar 14 09:01:46 crc kubenswrapper[4687]: I0314 09:01:46.504909 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6557h35m19.865839174s for next certificate rotation Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.505693 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-14 19:53:04.260762956 +0000 UTC Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.505947 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 5890h51m16.754820461s for next certificate rotation Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.690672 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.888015 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhqnh\" (UniqueName: \"kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh\") pod \"17f93f38-eae8-494e-b879-4240e2712982\" (UID: \"17f93f38-eae8-494e-b879-4240e2712982\") " Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.893664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh" (OuterVolumeSpecName: "kube-api-access-zhqnh") pod "17f93f38-eae8-494e-b879-4240e2712982" (UID: "17f93f38-eae8-494e-b879-4240e2712982"). 
InnerVolumeSpecName "kube-api-access-zhqnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:47 crc kubenswrapper[4687]: I0314 09:01:47.989474 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhqnh\" (UniqueName: \"kubernetes.io/projected/17f93f38-eae8-494e-b879-4240e2712982-kube-api-access-zhqnh\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:48 crc kubenswrapper[4687]: I0314 09:01:48.438512 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerStarted","Data":"079df4bac710aecd9de8751bce983fea8c81426f5a22f7c6b9454c5a06d8240a"} Mar 14 09:01:48 crc kubenswrapper[4687]: I0314 09:01:48.440153 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" event={"ID":"17f93f38-eae8-494e-b879-4240e2712982","Type":"ContainerDied","Data":"3ddb8e89843b3746619d2b6457bafb677911aaf2b1b238a67fb81474081e9ce5"} Mar 14 09:01:48 crc kubenswrapper[4687]: I0314 09:01:48.440405 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddb8e89843b3746619d2b6457bafb677911aaf2b1b238a67fb81474081e9ce5" Mar 14 09:01:48 crc kubenswrapper[4687]: I0314 09:01:48.440209 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-mm8hd" Mar 14 09:01:48 crc kubenswrapper[4687]: I0314 09:01:48.461684 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tn7f" podStartSLOduration=3.383045539 podStartE2EDuration="50.461652361s" podCreationTimestamp="2026-03-14 09:00:58 +0000 UTC" firstStartedPulling="2026-03-14 09:01:00.075305246 +0000 UTC m=+245.063545611" lastFinishedPulling="2026-03-14 09:01:47.153912058 +0000 UTC m=+292.142152433" observedRunningTime="2026-03-14 09:01:48.456098675 +0000 UTC m=+293.444339050" watchObservedRunningTime="2026-03-14 09:01:48.461652361 +0000 UTC m=+293.449892756" Mar 14 09:01:49 crc kubenswrapper[4687]: I0314 09:01:49.452129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerStarted","Data":"d561ee793aff1f750be1237a59ae4cb40b9bae9cc2188082ffb128f88bd17b13"} Mar 14 09:01:49 crc kubenswrapper[4687]: I0314 09:01:49.476032 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qp9c4" podStartSLOduration=3.995672461 podStartE2EDuration="52.476015901s" podCreationTimestamp="2026-03-14 09:00:57 +0000 UTC" firstStartedPulling="2026-03-14 09:00:59.051153459 +0000 UTC m=+244.039393834" lastFinishedPulling="2026-03-14 09:01:47.531496899 +0000 UTC m=+292.519737274" observedRunningTime="2026-03-14 09:01:49.47311232 +0000 UTC m=+294.461352695" watchObservedRunningTime="2026-03-14 09:01:49.476015901 +0000 UTC m=+294.464256276" Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.111936 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.112235 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.112284 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.112820 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.112871 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e" gracePeriod=600 Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.482486 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e" exitCode=0 Mar 14 09:01:54 crc kubenswrapper[4687]: I0314 09:01:54.482541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.219479 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-khk5g"] Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.497666 4687 generic.go:334] "Generic (PLEG): container finished" podID="764f93b4-9c3b-400d-b508-4534689e51a7" containerID="52168436c557c2c957d7c933b54bb3a9328aa4dafbeef6f26977d04b37824820" exitCode=0 Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.497841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerDied","Data":"52168436c557c2c957d7c933b54bb3a9328aa4dafbeef6f26977d04b37824820"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.500778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerStarted","Data":"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.502758 4687 generic.go:334] "Generic (PLEG): container finished" podID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerID="89ad9bbd9730d1cb1f3d16967eb08c9b5c829850687440b54bee056d3b0ed5dd" exitCode=0 Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.502813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerDied","Data":"89ad9bbd9730d1cb1f3d16967eb08c9b5c829850687440b54bee056d3b0ed5dd"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.505606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" 
event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerStarted","Data":"f32b87d3a5c7aba54844f1e513bf19d02331e97936066a810b20455665755ff9"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.508053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerStarted","Data":"5554fbf72d5640ab333841bf0c1ef3c08b9fcdf16cb2ea40b08e8dc0c2e10d93"} Mar 14 09:01:57 crc kubenswrapper[4687]: I0314 09:01:57.510357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead"} Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.037701 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.037758 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.280920 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.421665 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.422264 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.462049 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.520527 
4687 generic.go:334] "Generic (PLEG): container finished" podID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerID="f32b87d3a5c7aba54844f1e513bf19d02331e97936066a810b20455665755ff9" exitCode=0 Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.520595 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerDied","Data":"f32b87d3a5c7aba54844f1e513bf19d02331e97936066a810b20455665755ff9"} Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.523160 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerID="5554fbf72d5640ab333841bf0c1ef3c08b9fcdf16cb2ea40b08e8dc0c2e10d93" exitCode=0 Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.523758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerDied","Data":"5554fbf72d5640ab333841bf0c1ef3c08b9fcdf16cb2ea40b08e8dc0c2e10d93"} Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.532719 4687 generic.go:334] "Generic (PLEG): container finished" podID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerID="6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065" exitCode=0 Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.534462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerDied","Data":"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065"} Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.587531 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tn7f" Mar 14 09:01:58 crc kubenswrapper[4687]: I0314 09:01:58.593575 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:01:59 crc kubenswrapper[4687]: I0314 09:01:59.539732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerStarted","Data":"f235b8efd57ed0ab26d8877ee0af72d8f40734d85d0f17fd129b36e9b7856222"} Mar 14 09:01:59 crc kubenswrapper[4687]: I0314 09:01:59.543229 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerStarted","Data":"fd9f6f300484e3d631bbfe2f40013a8d55fd5a407f2dbf170b6d0a09ae43002a"} Mar 14 09:01:59 crc kubenswrapper[4687]: I0314 09:01:59.557186 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8w67" podStartSLOduration=4.211517038 podStartE2EDuration="1m5.557169386s" podCreationTimestamp="2026-03-14 09:00:54 +0000 UTC" firstStartedPulling="2026-03-14 09:00:56.972307527 +0000 UTC m=+241.960547902" lastFinishedPulling="2026-03-14 09:01:58.317959875 +0000 UTC m=+303.306200250" observedRunningTime="2026-03-14 09:01:59.55407678 +0000 UTC m=+304.542317145" watchObservedRunningTime="2026-03-14 09:01:59.557169386 +0000 UTC m=+304.545409761" Mar 14 09:01:59 crc kubenswrapper[4687]: I0314 09:01:59.570229 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x58t9" podStartSLOduration=3.32382603 podStartE2EDuration="1m3.570212194s" podCreationTimestamp="2026-03-14 09:00:56 +0000 UTC" firstStartedPulling="2026-03-14 09:00:58.032843025 +0000 UTC m=+243.021083400" lastFinishedPulling="2026-03-14 09:01:58.279229189 +0000 UTC m=+303.267469564" observedRunningTime="2026-03-14 09:01:59.569509057 +0000 UTC m=+304.557749432" watchObservedRunningTime="2026-03-14 09:01:59.570212194 +0000 UTC m=+304.558452559" Mar 14 09:02:00 crc 
kubenswrapper[4687]: I0314 09:02:00.137539 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557982-4tcmh"]
Mar 14 09:02:00 crc kubenswrapper[4687]: E0314 09:02:00.137778 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f93f38-eae8-494e-b879-4240e2712982" containerName="oc"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.137790 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f93f38-eae8-494e-b879-4240e2712982" containerName="oc"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.137918 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f93f38-eae8-494e-b879-4240e2712982" containerName="oc"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.138277 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-4tcmh"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.139954 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.140250 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.142314 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.148016 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-4tcmh"]
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.167298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvw4x\" (UniqueName: \"kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x\") pod \"auto-csr-approver-29557982-4tcmh\" (UID: \"ef7633ff-4aeb-4906-93e2-446a680ea1d2\") " pod="openshift-infra/auto-csr-approver-29557982-4tcmh"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.269241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4x\" (UniqueName: \"kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x\") pod \"auto-csr-approver-29557982-4tcmh\" (UID: \"ef7633ff-4aeb-4906-93e2-446a680ea1d2\") " pod="openshift-infra/auto-csr-approver-29557982-4tcmh"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.289865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4x\" (UniqueName: \"kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x\") pod \"auto-csr-approver-29557982-4tcmh\" (UID: \"ef7633ff-4aeb-4906-93e2-446a680ea1d2\") " pod="openshift-infra/auto-csr-approver-29557982-4tcmh"
Mar 14 09:02:00 crc kubenswrapper[4687]: I0314 09:02:00.453975 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-4tcmh"
Mar 14 09:02:02 crc kubenswrapper[4687]: I0314 09:02:02.361771 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"]
Mar 14 09:02:02 crc kubenswrapper[4687]: I0314 09:02:02.362465 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tn7f" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="registry-server" containerID="cri-o://079df4bac710aecd9de8751bce983fea8c81426f5a22f7c6b9454c5a06d8240a" gracePeriod=2
Mar 14 09:02:03 crc kubenswrapper[4687]: I0314 09:02:03.563479 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerID="079df4bac710aecd9de8751bce983fea8c81426f5a22f7c6b9454c5a06d8240a" exitCode=0
Mar 14 09:02:03 crc kubenswrapper[4687]: I0314 09:02:03.563528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerDied","Data":"079df4bac710aecd9de8751bce983fea8c81426f5a22f7c6b9454c5a06d8240a"}
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.452652 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tn7f"
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.532029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities\") pod \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") "
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.532093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content\") pod \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") "
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.532121 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6vj\" (UniqueName: \"kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj\") pod \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\" (UID: \"2ad6f885-26fe-42ec-9cd8-3578b5ce5574\") "
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.533075 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities" (OuterVolumeSpecName: "utilities") pod "2ad6f885-26fe-42ec-9cd8-3578b5ce5574" (UID: "2ad6f885-26fe-42ec-9cd8-3578b5ce5574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.542444 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj" (OuterVolumeSpecName: "kube-api-access-cd6vj") pod "2ad6f885-26fe-42ec-9cd8-3578b5ce5574" (UID: "2ad6f885-26fe-42ec-9cd8-3578b5ce5574"). InnerVolumeSpecName "kube-api-access-cd6vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.571810 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tn7f" event={"ID":"2ad6f885-26fe-42ec-9cd8-3578b5ce5574","Type":"ContainerDied","Data":"b662e05dccda23fc72526ac8967e91f8b8fd2d6b943efd299d3590dae10c7946"}
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.571874 4687 scope.go:117] "RemoveContainer" containerID="079df4bac710aecd9de8751bce983fea8c81426f5a22f7c6b9454c5a06d8240a"
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.571876 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tn7f"
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.633576 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:04 crc kubenswrapper[4687]: I0314 09:02:04.633616 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6vj\" (UniqueName: \"kubernetes.io/projected/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-kube-api-access-cd6vj\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.012142 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad6f885-26fe-42ec-9cd8-3578b5ce5574" (UID: "2ad6f885-26fe-42ec-9cd8-3578b5ce5574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.038595 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad6f885-26fe-42ec-9cd8-3578b5ce5574-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.158856 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8w67"
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.158902 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8w67"
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.203325 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"]
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.207542 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tn7f"]
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.212420 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8w67"
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.614480 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8w67"
Mar 14 09:02:05 crc kubenswrapper[4687]: I0314 09:02:05.747314 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" path="/var/lib/kubelet/pods/2ad6f885-26fe-42ec-9cd8-3578b5ce5574/volumes"
Mar 14 09:02:06 crc kubenswrapper[4687]: I0314 09:02:06.479641 4687 scope.go:117] "RemoveContainer" containerID="e848835a1949557d7b7b5bdd067dc11e820acb51e9fa0f86a6564ce4539a7b61"
Mar 14 09:02:07 crc kubenswrapper[4687]: I0314 09:02:07.038036 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x58t9"
Mar 14 09:02:07 crc kubenswrapper[4687]: I0314 09:02:07.038285 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x58t9"
Mar 14 09:02:07 crc kubenswrapper[4687]: I0314 09:02:07.077967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x58t9"
Mar 14 09:02:07 crc kubenswrapper[4687]: I0314 09:02:07.623714 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x58t9"
Mar 14 09:02:09 crc kubenswrapper[4687]: I0314 09:02:09.137031 4687 scope.go:117] "RemoveContainer" containerID="9dedded0c3cb308404927e5913300a80355274ec8e7b5e6e2385791cfae37c4a"
Mar 14 09:02:09 crc kubenswrapper[4687]: I0314 09:02:09.521596 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-4tcmh"]
Mar 14 09:02:10 crc kubenswrapper[4687]: W0314 09:02:10.953381 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7633ff_4aeb_4906_93e2_446a680ea1d2.slice/crio-74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be WatchSource:0}: Error finding container 74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be: Status 404 returned error can't find the container with id 74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be
Mar 14 09:02:11 crc kubenswrapper[4687]: I0314 09:02:11.679110 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" event={"ID":"ef7633ff-4aeb-4906-93e2-446a680ea1d2","Type":"ContainerStarted","Data":"74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be"}
Mar 14 09:02:11 crc kubenswrapper[4687]: I0314 09:02:11.886181 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.263594 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"]
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.263839 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" podUID="a253c1c3-19cb-4951-9790-d284412e93c2" containerName="controller-manager" containerID="cri-o://db66910abda508e8e4c281189397481072fd0627180b883c2ab1e3353dd4fbcd" gracePeriod=30
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.370621 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"]
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.370892 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerName="route-controller-manager" containerID="cri-o://730f26fa7a7b9e17c7280f1d2ef0c3f9c95a296889201169efe771de8bac2c05" gracePeriod=30
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.888900 4687 patch_prober.go:28] interesting pod/route-controller-manager-5d84cbd8f8-d5hv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
Mar 14 09:02:13 crc kubenswrapper[4687]: I0314 09:02:13.889303 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.696200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerStarted","Data":"e8dde221a5869c5f7d4dc028c02b9a58db25b746c4091a299151105017abba5f"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.713015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerStarted","Data":"371e5a3d2d04faccd040f043b3bf6716feb4d5ceacc35504ee162404861ad529"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.717491 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p62jj" podStartSLOduration=3.334738902 podStartE2EDuration="1m17.717476071s" podCreationTimestamp="2026-03-14 09:00:57 +0000 UTC" firstStartedPulling="2026-03-14 09:00:59.047868947 +0000 UTC m=+244.036109322" lastFinishedPulling="2026-03-14 09:02:13.430606116 +0000 UTC m=+318.418846491" observedRunningTime="2026-03-14 09:02:14.712807306 +0000 UTC m=+319.701047681" watchObservedRunningTime="2026-03-14 09:02:14.717476071 +0000 UTC m=+319.705716446"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.745573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerStarted","Data":"375c35f6d2a11a0fb507d5b55a2a937abb04fecc58f3076413bc94be5f0fd1f0"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.750961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerStarted","Data":"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.757043 4687 generic.go:334] "Generic (PLEG): container finished" podID="a253c1c3-19cb-4951-9790-d284412e93c2" containerID="db66910abda508e8e4c281189397481072fd0627180b883c2ab1e3353dd4fbcd" exitCode=0
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.757165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" event={"ID":"a253c1c3-19cb-4951-9790-d284412e93c2","Type":"ContainerDied","Data":"db66910abda508e8e4c281189397481072fd0627180b883c2ab1e3353dd4fbcd"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.760873 4687 generic.go:334] "Generic (PLEG): container finished" podID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerID="730f26fa7a7b9e17c7280f1d2ef0c3f9c95a296889201169efe771de8bac2c05" exitCode=0
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.760900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" event={"ID":"f8d9969d-4881-43b0-a8d6-790e31ae3ea6","Type":"ContainerDied","Data":"730f26fa7a7b9e17c7280f1d2ef0c3f9c95a296889201169efe771de8bac2c05"}
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.765470 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chn6l" podStartSLOduration=7.562675018 podStartE2EDuration="1m19.765455032s" podCreationTimestamp="2026-03-14 09:00:55 +0000 UTC" firstStartedPulling="2026-03-14 09:00:56.937933196 +0000 UTC m=+241.926173571" lastFinishedPulling="2026-03-14 09:02:09.1407132 +0000 UTC m=+314.128953585" observedRunningTime="2026-03-14 09:02:14.736256189 +0000 UTC m=+319.724496564" watchObservedRunningTime="2026-03-14 09:02:14.765455032 +0000 UTC m=+319.753695407"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.898801 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.922739 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8f9h8" podStartSLOduration=3.694043315 podStartE2EDuration="1m20.922721872s" podCreationTimestamp="2026-03-14 09:00:54 +0000 UTC" firstStartedPulling="2026-03-14 09:00:56.9558644 +0000 UTC m=+241.944104775" lastFinishedPulling="2026-03-14 09:02:14.184542957 +0000 UTC m=+319.172783332" observedRunningTime="2026-03-14 09:02:14.784588619 +0000 UTC m=+319.772828994" watchObservedRunningTime="2026-03-14 09:02:14.922721872 +0000 UTC m=+319.910962247"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.934725 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"]
Mar 14 09:02:14 crc kubenswrapper[4687]: E0314 09:02:14.934938 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="registry-server"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.934951 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="registry-server"
Mar 14 09:02:14 crc kubenswrapper[4687]: E0314 09:02:14.934965 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="extract-content"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.934971 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="extract-content"
Mar 14 09:02:14 crc kubenswrapper[4687]: E0314 09:02:14.934986 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="extract-utilities"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.934992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="extract-utilities"
Mar 14 09:02:14 crc kubenswrapper[4687]: E0314 09:02:14.935006 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a253c1c3-19cb-4951-9790-d284412e93c2" containerName="controller-manager"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.935012 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a253c1c3-19cb-4951-9790-d284412e93c2" containerName="controller-manager"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.935094 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad6f885-26fe-42ec-9cd8-3578b5ce5574" containerName="registry-server"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.935104 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a253c1c3-19cb-4951-9790-d284412e93c2" containerName="controller-manager"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.935459 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.946889 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"]
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.961354 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.969745 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fp7h\" (UniqueName: \"kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h\") pod \"a253c1c3-19cb-4951-9790-d284412e93c2\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") "
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.969848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert\") pod \"a253c1c3-19cb-4951-9790-d284412e93c2\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") "
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.969876 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config\") pod \"a253c1c3-19cb-4951-9790-d284412e93c2\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") "
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.969930 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles\") pod \"a253c1c3-19cb-4951-9790-d284412e93c2\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") "
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.969951 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca\") pod \"a253c1c3-19cb-4951-9790-d284412e93c2\" (UID: \"a253c1c3-19cb-4951-9790-d284412e93c2\") "
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970150 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2lb\" (UniqueName: \"kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970659 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "a253c1c3-19cb-4951-9790-d284412e93c2" (UID: "a253c1c3-19cb-4951-9790-d284412e93c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970772 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a253c1c3-19cb-4951-9790-d284412e93c2" (UID: "a253c1c3-19cb-4951-9790-d284412e93c2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.970787 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config" (OuterVolumeSpecName: "config") pod "a253c1c3-19cb-4951-9790-d284412e93c2" (UID: "a253c1c3-19cb-4951-9790-d284412e93c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.975523 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a253c1c3-19cb-4951-9790-d284412e93c2" (UID: "a253c1c3-19cb-4951-9790-d284412e93c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:02:14 crc kubenswrapper[4687]: I0314 09:02:14.978509 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h" (OuterVolumeSpecName: "kube-api-access-2fp7h") pod "a253c1c3-19cb-4951-9790-d284412e93c2" (UID: "a253c1c3-19cb-4951-9790-d284412e93c2"). InnerVolumeSpecName "kube-api-access-2fp7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.070927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config\") pod \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") "
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.070999 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4mww\" (UniqueName: \"kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww\") pod \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") "
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071020 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca\") pod \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") "
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071036 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert\") pod \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\" (UID: \"f8d9969d-4881-43b0-a8d6-790e31ae3ea6\") "
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071204 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071232 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2lb\" (UniqueName: \"kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071374 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071385 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fp7h\" (UniqueName: \"kubernetes.io/projected/a253c1c3-19cb-4951-9790-d284412e93c2-kube-api-access-2fp7h\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071394 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a253c1c3-19cb-4951-9790-d284412e93c2-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071402 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-config\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071410 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a253c1c3-19cb-4951-9790-d284412e93c2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.071851 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config" (OuterVolumeSpecName: "config") pod "f8d9969d-4881-43b0-a8d6-790e31ae3ea6" (UID: "f8d9969d-4881-43b0-a8d6-790e31ae3ea6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.073536 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww" (OuterVolumeSpecName: "kube-api-access-v4mww") pod "f8d9969d-4881-43b0-a8d6-790e31ae3ea6" (UID: "f8d9969d-4881-43b0-a8d6-790e31ae3ea6"). InnerVolumeSpecName "kube-api-access-v4mww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.073927 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8d9969d-4881-43b0-a8d6-790e31ae3ea6" (UID: "f8d9969d-4881-43b0-a8d6-790e31ae3ea6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.076100 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8d9969d-4881-43b0-a8d6-790e31ae3ea6" (UID: "f8d9969d-4881-43b0-a8d6-790e31ae3ea6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.077999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.079158 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.079488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.080037 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.089498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2lb\" (UniqueName: \"kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb\") pod \"controller-manager-58f79c4f57-wcppw\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.172306 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-config\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.172351 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4mww\" (UniqueName: \"kubernetes.io/projected/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-kube-api-access-v4mww\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.172362 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.172373 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d9969d-4881-43b0-a8d6-790e31ae3ea6-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.238490 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8f9h8"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.238542 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8f9h8"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.298270 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.505687 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chn6l"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.505978 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chn6l"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.781621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb" event={"ID":"a253c1c3-19cb-4951-9790-d284412e93c2","Type":"ContainerDied","Data":"84e6883bd9d171201c84c4b56161cced1f710f24b328ff8e8e41a91101c79669"}
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.781654 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78cf64c84b-59bwb"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.781678 4687 scope.go:117] "RemoveContainer" containerID="db66910abda508e8e4c281189397481072fd0627180b883c2ab1e3353dd4fbcd"
Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.792086 4687 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.792090 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2" event={"ID":"f8d9969d-4881-43b0-a8d6-790e31ae3ea6","Type":"ContainerDied","Data":"c3ef17c5d62ded68b2fb31f71edbf5c791dacf2a727a7077661486f0bedaf40d"} Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.795455 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8b47421-912e-4faa-b3ed-33881459d76e" containerID="375c35f6d2a11a0fb507d5b55a2a937abb04fecc58f3076413bc94be5f0fd1f0" exitCode=0 Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.798768 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerDied","Data":"375c35f6d2a11a0fb507d5b55a2a937abb04fecc58f3076413bc94be5f0fd1f0"} Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.805463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" event={"ID":"ef7633ff-4aeb-4906-93e2-446a680ea1d2","Type":"ContainerStarted","Data":"7a79d55e50cf92c9dde41b9eefebbb69b51c8de5b1cfc0c86b2e37a13e55a4bc"} Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.816689 4687 scope.go:117] "RemoveContainer" containerID="730f26fa7a7b9e17c7280f1d2ef0c3f9c95a296889201169efe771de8bac2c05" Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.825695 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"] Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.839666 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"] Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.839733 4687 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78cf64c84b-59bwb"] Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.871352 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"] Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.880332 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d84cbd8f8-d5hv2"] Mar 14 09:02:15 crc kubenswrapper[4687]: I0314 09:02:15.886487 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" podStartSLOduration=13.625773401 podStartE2EDuration="15.886470336s" podCreationTimestamp="2026-03-14 09:02:00 +0000 UTC" firstStartedPulling="2026-03-14 09:02:12.777511788 +0000 UTC m=+317.765752163" lastFinishedPulling="2026-03-14 09:02:15.038208723 +0000 UTC m=+320.026449098" observedRunningTime="2026-03-14 09:02:15.885897743 +0000 UTC m=+320.874138138" watchObservedRunningTime="2026-03-14 09:02:15.886470336 +0000 UTC m=+320.874710721" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.275671 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8f9h8" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="registry-server" probeResult="failure" output=< Mar 14 09:02:16 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:02:16 crc kubenswrapper[4687]: > Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.468080 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.468451 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerName="route-controller-manager" Mar 14 09:02:16 crc 
kubenswrapper[4687]: I0314 09:02:16.468473 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerName="route-controller-manager" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.468644 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" containerName="route-controller-manager" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469031 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469162 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469308 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe" gracePeriod=15 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469369 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62" gracePeriod=15 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469374 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://39509f276273ece1deeff2f96ae4e69e1047642c297c704a9b637602df040bc1" gracePeriod=15 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469420 4687 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac" gracePeriod=15 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.469431 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88" gracePeriod=15 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470003 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470312 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470325 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470351 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470358 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470366 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470372 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470378 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470384 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470396 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470403 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470411 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470417 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470426 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470431 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470441 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470447 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470455 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470463 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470559 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470568 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470575 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470584 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470593 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470599 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470607 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470613 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470734 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470744 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.470754 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470760 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.470865 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.471037 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.501899 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.553433 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-chn6l" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="registry-server" 
probeResult="failure" output=< Mar 14 09:02:16 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:02:16 crc kubenswrapper[4687]: > Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592343 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592455 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592676 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.592785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.677020 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.677091 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694561 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694602 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694617 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694673 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694704 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694636 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694807 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694835 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.694940 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.799042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.812172 4687 generic.go:334] "Generic (PLEG): container finished" podID="aeb28798-769f-4a7a-8da8-a5213458d060" containerID="719d56cd6a31d2cfc089557383c95f59df2910ef9684395f33ede4928b7a6945" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.812243 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"aeb28798-769f-4a7a-8da8-a5213458d060","Type":"ContainerDied","Data":"719d56cd6a31d2cfc089557383c95f59df2910ef9684395f33ede4928b7a6945"} Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.813179 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.813397 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.813617 4687 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.815054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerStarted","Data":"c1b72f08b1ee5d7f7ed6bd748141dd20cb33168b8e40905b999160e2083fd4a2"} Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.815551 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.815750 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.815921 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.816156 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.817328 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/4.log" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818258 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818787 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39509f276273ece1deeff2f96ae4e69e1047642c297c704a9b637602df040bc1" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818809 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818816 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818823 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88" exitCode=2 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.818874 4687 scope.go:117] "RemoveContainer" containerID="aaefbd164baf3456f556cafa889858b7a75a247777e54cd1e37ce155b292f925" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.820933 4687 
generic.go:334] "Generic (PLEG): container finished" podID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" containerID="7a79d55e50cf92c9dde41b9eefebbb69b51c8de5b1cfc0c86b2e37a13e55a4bc" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.821005 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" event={"ID":"ef7633ff-4aeb-4906-93e2-446a680ea1d2","Type":"ContainerDied","Data":"7a79d55e50cf92c9dde41b9eefebbb69b51c8de5b1cfc0c86b2e37a13e55a4bc"} Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.821587 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.821793 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.822018 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.822295 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.822712 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.824847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" event={"ID":"541fbea6-1d05-4713-b587-f601c80e24b0","Type":"ContainerStarted","Data":"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550"} Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.824892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" event={"ID":"541fbea6-1d05-4713-b587-f601c80e24b0","Type":"ContainerStarted","Data":"b8ca67a6c470f9252fb02c2d38501fefa4a61cce82a549a6882b5039595ca97e"} Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.826194 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.826309 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.826527 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.826701 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.826891 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.827135 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.827583 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.831639 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.832069 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.832280 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.832486 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.832670 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.832847 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.833015 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: W0314 09:02:16.844203 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bc96a6fc95d04a240065a077e415a7bdb49c0fead0eb384c5a775f1d064015b1 WatchSource:0}: Error finding container bc96a6fc95d04a240065a077e415a7bdb49c0fead0eb384c5a775f1d064015b1: Status 404 returned error can't find the container with id bc96a6fc95d04a240065a077e415a7bdb49c0fead0eb384c5a775f1d064015b1 Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.873909 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.874325 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.874565 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc 
kubenswrapper[4687]: E0314 09:02:16.874825 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.875184 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4687]: I0314 09:02:16.875245 4687 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 14 09:02:16 crc kubenswrapper[4687]: E0314 09:02:16.875818 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.077270 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.457022 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e7347cd015fe2e956aa8ab4c3758a0837ed15b29db0dd672f337493d8798dc6e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f7aaa01b9e154271df658a6780ab0f34ff1c1e4be33ee30fa79fe6062902b6a4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1248441052},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"
sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:bcf9c3708ebdf30e8320c2f533fdc7298d06d5a977624dbafd8dcc2b59486f7d\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eb4aa47a75427b0a945a5db102beb4b10d3f5d5fe3f04d94452e1638d7153a69\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221742299},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d
8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.457459 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.457898 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.458189 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.458760 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.458784 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 
09:02:17 crc kubenswrapper[4687]: E0314 09:02:17.478161 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.747979 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a253c1c3-19cb-4951-9790-d284412e93c2" path="/var/lib/kubelet/pods/a253c1c3-19cb-4951-9790-d284412e93c2/volumes" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.748480 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d9969d-4881-43b0-a8d6-790e31ae3ea6" path="/var/lib/kubelet/pods/f8d9969d-4881-43b0-a8d6-790e31ae3ea6/volumes" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.785834 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.785945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.828902 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.829834 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.830167 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" 
pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.830353 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.830519 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.830690 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.830897 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.832883 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.835327 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d661aacbb1cf24ca7c2e150ee3ffa47c0e5d20907db4beaa5617d1466cb69ef7"} Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.835395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc96a6fc95d04a240065a077e415a7bdb49c0fead0eb384c5a775f1d064015b1"} Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.836125 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.836482 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.837019 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc 
kubenswrapper[4687]: I0314 09:02:17.837617 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.839466 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:17 crc kubenswrapper[4687]: I0314 09:02:17.839951 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.138626 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.139466 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.139897 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.140178 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.140466 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.140694 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.140935 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.143761 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.144281 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.144692 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.144968 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: 
connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.145254 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.145504 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.145764 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211226 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvw4x\" (UniqueName: \"kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x\") pod \"ef7633ff-4aeb-4906-93e2-446a680ea1d2\" (UID: \"ef7633ff-4aeb-4906-93e2-446a680ea1d2\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211272 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock\") pod \"aeb28798-769f-4a7a-8da8-a5213458d060\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211405 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access\") pod \"aeb28798-769f-4a7a-8da8-a5213458d060\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211460 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir\") pod \"aeb28798-769f-4a7a-8da8-a5213458d060\" (UID: \"aeb28798-769f-4a7a-8da8-a5213458d060\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock" (OuterVolumeSpecName: "var-lock") pod "aeb28798-769f-4a7a-8da8-a5213458d060" (UID: "aeb28798-769f-4a7a-8da8-a5213458d060"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aeb28798-769f-4a7a-8da8-a5213458d060" (UID: "aeb28798-769f-4a7a-8da8-a5213458d060"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211720 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.211732 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aeb28798-769f-4a7a-8da8-a5213458d060-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.215990 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x" (OuterVolumeSpecName: "kube-api-access-gvw4x") pod "ef7633ff-4aeb-4906-93e2-446a680ea1d2" (UID: "ef7633ff-4aeb-4906-93e2-446a680ea1d2"). InnerVolumeSpecName "kube-api-access-gvw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.216009 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aeb28798-769f-4a7a-8da8-a5213458d060" (UID: "aeb28798-769f-4a7a-8da8-a5213458d060"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: E0314 09:02:18.279109 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.313358 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvw4x\" (UniqueName: \"kubernetes.io/projected/ef7633ff-4aeb-4906-93e2-446a680ea1d2-kube-api-access-gvw4x\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.313411 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb28798-769f-4a7a-8da8-a5213458d060-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.842200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" event={"ID":"ef7633ff-4aeb-4906-93e2-446a680ea1d2","Type":"ContainerDied","Data":"74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be"} Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.842504 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74bce9a163996f1ebc12e3f2ae96b025a22ff3e77e68b57be32920c322cfd5be" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.842228 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.843629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"aeb28798-769f-4a7a-8da8-a5213458d060","Type":"ContainerDied","Data":"e7785c7b2aef00783b7f4b87350e0e8b7e33aacd9e7c86eccf681bf0b6bbc6c3"} Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.843664 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7785c7b2aef00783b7f4b87350e0e8b7e33aacd9e7c86eccf681bf0b6bbc6c3" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.843716 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.846466 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.847119 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe" exitCode=0 Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.857982 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.858451 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.858694 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.859087 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.859479 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.859738 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.861118 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" 
pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.861391 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.861608 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.861820 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.862008 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.862210 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.864904 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.865600 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.865878 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.866132 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.866385 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.866571 4687 
status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.866769 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.867018 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.867225 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921100 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921213 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921296 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921365 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921503 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921515 4687 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:18 crc kubenswrapper[4687]: I0314 09:02:18.921523 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.743371 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.854673 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.855780 4687 scope.go:117] "RemoveContainer" containerID="39509f276273ece1deeff2f96ae4e69e1047642c297c704a9b637602df040bc1" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.855870 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.857416 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.858121 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.858923 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.859213 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.859658 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.859990 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.860386 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.861319 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.861700 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.862656 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.863157 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.863510 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.863878 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.864275 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.871479 4687 scope.go:117] "RemoveContainer" containerID="ee2faaab2797ead9819d9f578ea2575db80c2e668475595ba1c66dc58b7997ac" Mar 14 09:02:19 crc 
kubenswrapper[4687]: E0314 09:02:19.880092 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.885363 4687 scope.go:117] "RemoveContainer" containerID="a60b976c2e529b0cb1c804706e37189ded783fe576bc4a588e4a922b5ced1c62" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.900028 4687 scope.go:117] "RemoveContainer" containerID="5112867e501fb6e47829e3fb9b138206e865231dce0df43a212ee86779423d88" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.919526 4687 scope.go:117] "RemoveContainer" containerID="9e9da3329f3aabac4eda1e44a838b73ab50c99f54b556823ac991fd512867bbe" Mar 14 09:02:19 crc kubenswrapper[4687]: I0314 09:02:19.936392 4687 scope.go:117] "RemoveContainer" containerID="f3605026b51acaad43fb5f7b42464c72faa534ecfb111c0b06e7ce29041af784" Mar 14 09:02:21 crc kubenswrapper[4687]: E0314 09:02:21.515986 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-sgxm6.189ca9b24b87ad00 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-sgxm6,UID:a8b47421-912e-4faa-b3ed-33881459d76e,APIVersion:v1,ResourceVersion:28532,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 696ms (696ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 09:02:16.51358848 +0000 UTC m=+321.501828855,LastTimestamp:2026-03-14 09:02:16.51358848 +0000 UTC m=+321.501828855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.250673 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerName="oauth-openshift" containerID="cri-o://5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6" gracePeriod=15 Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.705774 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.706806 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.707078 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.707378 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.707613 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.707870 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.708198 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.708768 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.769181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.769256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.769296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tlp\" (UniqueName: \"kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.769327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.770149 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.770702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: 
\"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771316 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771351 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771373 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771410 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca\") pod \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\" (UID: \"5d0c6355-63a3-4e53-b8f6-283e5ef456ed\") " Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771756 4687 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.771774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.772176 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.773116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.774987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.775047 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.775620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp" (OuterVolumeSpecName: "kube-api-access-t7tlp") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "kube-api-access-t7tlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.775668 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.775936 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.776107 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.776192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.776279 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.781842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.782489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5d0c6355-63a3-4e53-b8f6-283e5ef456ed" (UID: "5d0c6355-63a3-4e53-b8f6-283e5ef456ed"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872627 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872656 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872667 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872677 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tlp\" (UniqueName: \"kubernetes.io/projected/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-kube-api-access-t7tlp\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872688 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872699 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872709 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872718 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872727 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872736 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872745 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872754 4687 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.872763 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d0c6355-63a3-4e53-b8f6-283e5ef456ed-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.875575 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerID="5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.875624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" event={"ID":"5d0c6355-63a3-4e53-b8f6-283e5ef456ed","Type":"ContainerDied","Data":"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6"} Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.875643 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.875662 4687 scope.go:117] "RemoveContainer" containerID="5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.875651 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" event={"ID":"5d0c6355-63a3-4e53-b8f6-283e5ef456ed","Type":"ContainerDied","Data":"0496dd3ee038ceaafda16e6e907c77a2bac31f94ef986a664dd754f52986c961"} Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.876442 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.876697 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.877018 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.877480 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" 
pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.877975 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.878255 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.878605 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.892389 4687 scope.go:117] "RemoveContainer" containerID="5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6" Mar 14 09:02:22 crc kubenswrapper[4687]: E0314 09:02:22.892857 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6\": container with ID starting with 
5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6 not found: ID does not exist" containerID="5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.892918 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6"} err="failed to get container status \"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6\": rpc error: code = NotFound desc = could not find container \"5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6\": container with ID starting with 5d9327b2e466194065091860cbedb13fd67d4128b55ae6791f823b9c0d9e63c6 not found: ID does not exist" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.893080 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.893361 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.893755 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: 
connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.894000 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.894260 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.894554 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:22 crc kubenswrapper[4687]: I0314 09:02:22.894820 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:23 crc kubenswrapper[4687]: E0314 09:02:23.081236 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="6.4s" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 
09:02:25.078958 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.079299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.117995 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.118504 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.118823 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.119387 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.119828 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.120184 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.120434 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.120750 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.302820 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.303501 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc 
kubenswrapper[4687]: I0314 09:02:25.303880 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.304414 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.304900 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.305390 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.305785 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: 
connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.306082 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.306481 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.352443 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.352894 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.353108 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.353477 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.353977 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.354472 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.354880 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.355251 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.355663 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.551506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.552105 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.552428 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.552717 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.553145 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.553589 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.553953 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.554453 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.554771 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.555120 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.585097 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.585771 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.586241 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.586750 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.587113 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: 
connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.587631 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.588158 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.588655 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.589037 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.589564 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.739370 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.740043 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.740523 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.740790 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.741053 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.741496 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.741973 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.742309 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.742647 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: E0314 09:02:25.771850 4687 desired_state_of_world_populator.go:312] "Error processing volume" 
err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" volumeName="registry-storage" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.938562 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.939148 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.939731 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.939962 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.940245 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.940729 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.940995 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.941365 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.941660 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:25 crc kubenswrapper[4687]: I0314 09:02:25.942108 4687 status_manager.go:851] "Failed to get status 
for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 09:02:27.799010 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:02:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e7347cd0
15fe2e956aa8ab4c3758a0837ed15b29db0dd672f337493d8798dc6e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f7aaa01b9e154271df658a6780ab0f34ff1c1e4be33ee30fa79fe6062902b6a4\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1248441052},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:bcf9c3708ebdf30e8320c2f533fdc7298d06d5a977624dbafd8dcc2b59486f7d\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eb4aa47a75427b0a945a5db102beb4b10d3f5d5fe3f04d94452e1638d7153a69\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221742299},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc4
06eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"]
,\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 09:02:27.799934 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 09:02:27.800313 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 09:02:27.800564 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 
09:02:27.800724 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: E0314 09:02:27.800740 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.823703 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.824306 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.824708 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.825569 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.826056 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" 
pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.826371 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.826779 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.827042 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.827298 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:27 crc kubenswrapper[4687]: I0314 09:02:27.827614 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:29 crc kubenswrapper[4687]: E0314 09:02:29.482227 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="7s" Mar 14 09:02:30 crc kubenswrapper[4687]: E0314 09:02:30.475020 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-sgxm6.189ca9b24b87ad00 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-sgxm6,UID:a8b47421-912e-4faa-b3ed-33881459d76e,APIVersion:v1,ResourceVersion:28532,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 696ms (696ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 09:02:16.51358848 +0000 UTC m=+321.501828855,LastTimestamp:2026-03-14 09:02:16.51358848 +0000 UTC m=+321.501828855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.736521 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.737644 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.741528 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.741924 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.742137 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.742470 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.742890 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.743193 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.743937 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.744239 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.752700 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.752735 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:31 crc kubenswrapper[4687]: E0314 09:02:31.753152 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.753670 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:31 crc kubenswrapper[4687]: W0314 09:02:31.771404 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-81dc3417f10355e346ce3528888d252762d5086bbcf283467cffdc394cc75288 WatchSource:0}: Error finding container 81dc3417f10355e346ce3528888d252762d5086bbcf283467cffdc394cc75288: Status 404 returned error can't find the container with id 81dc3417f10355e346ce3528888d252762d5086bbcf283467cffdc394cc75288 Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.905515 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.905571 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.939798 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81dc3417f10355e346ce3528888d252762d5086bbcf283467cffdc394cc75288"} Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.941634 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.942920 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.942954 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa" exitCode=1 Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.942973 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa"} Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.943429 4687 scope.go:117] "RemoveContainer" containerID="14a69729126b123c9f64ca9d9fb1aba89be2981979f58077817e84c3f56ae5aa" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.943599 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.943941 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.944354 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.944627 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.944944 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.945163 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" 
pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.945421 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.945671 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.945899 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:31 crc kubenswrapper[4687]: I0314 09:02:31.946183 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.949977 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3a10e60c54e844c8b377efcaacc37847372b187eec0469d5934658953e927c22" exitCode=0 Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.950247 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.950266 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.950028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3a10e60c54e844c8b377efcaacc37847372b187eec0469d5934658953e927c22"} Mar 14 09:02:32 crc kubenswrapper[4687]: E0314 09:02:32.950488 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.950485 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.951114 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 
38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.951388 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.951597 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.951805 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.952041 4687 status_manager.go:851] "Failed to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.952284 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.952484 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.952674 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.953066 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.953417 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.954782 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.954833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9158033c77106e6b44cff561845c2281b7212a3057fab0f528249b66e05b42fc"} Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.955913 4687 status_manager.go:851] "Failed to get status for pod" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" pod="openshift-marketplace/certified-operators-sgxm6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sgxm6\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.956537 4687 status_manager.go:851] "Failed to get status for pod" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" pod="openshift-marketplace/community-operators-8f9h8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8f9h8\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.956865 4687 status_manager.go:851] "Failed to get status for pod" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" pod="openshift-marketplace/certified-operators-chn6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chn6l\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.957293 4687 status_manager.go:851] "Failed to get status for pod" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.957593 4687 status_manager.go:851] "Failed to get status for pod" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" 
pod="openshift-infra/auto-csr-approver-29557982-4tcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29557982-4tcmh\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.957875 4687 status_manager.go:851] "Failed to get status for pod" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-58f79c4f57-wcppw\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.958121 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.958438 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.958684 4687 status_manager.go:851] "Failed to get status for pod" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" pod="openshift-marketplace/redhat-marketplace-p62jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p62jj\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:32 crc kubenswrapper[4687]: I0314 09:02:32.958959 4687 status_manager.go:851] "Failed 
to get status for pod" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" pod="openshift-authentication/oauth-openshift-558db77b4-khk5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-khk5g\": dial tcp 38.102.83.219:6443: connect: connection refused" Mar 14 09:02:33 crc kubenswrapper[4687]: I0314 09:02:33.967951 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85937c148b9eeb502c473042484e927dad9b8d1acaae4ab681a8350772543670"} Mar 14 09:02:33 crc kubenswrapper[4687]: I0314 09:02:33.968267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"acb8486dfd21d1f461e248b992c609a0c467413e0cc23166506bbd67d0be3d93"} Mar 14 09:02:33 crc kubenswrapper[4687]: I0314 09:02:33.968282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1385e940803a88cfc37fe47792330bd126454485f96b3edec93f8549b02a9c12"} Mar 14 09:02:34 crc kubenswrapper[4687]: I0314 09:02:34.993393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e10c1776718de5fcbf7604dd2784e634da9da2d5baa9e145eb5ce06f6e5dd1ba"} Mar 14 09:02:34 crc kubenswrapper[4687]: I0314 09:02:34.993719 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e51b5a5a007f9d0d6e49b4db5a069ba4a58016c6c415e2272e1b647b78d488e9"} Mar 14 09:02:34 crc kubenswrapper[4687]: I0314 09:02:34.993985 4687 kubelet.go:1909] "Trying to delete 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:34 crc kubenswrapper[4687]: I0314 09:02:34.994001 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:34 crc kubenswrapper[4687]: I0314 09:02:34.994279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:36 crc kubenswrapper[4687]: I0314 09:02:36.753945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:36 crc kubenswrapper[4687]: I0314 09:02:36.756303 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:36 crc kubenswrapper[4687]: I0314 09:02:36.759173 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:40 crc kubenswrapper[4687]: I0314 09:02:40.008445 4687 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:40 crc kubenswrapper[4687]: I0314 09:02:40.177534 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="396b376c-4cb6-455b-8a3b-d3069b7ac21f" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.026537 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.027021 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.029154 
4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="396b376c-4cb6-455b-8a3b-d3069b7ac21f" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.577895 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.584496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:41 crc kubenswrapper[4687]: I0314 09:02:41.905675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:42 crc kubenswrapper[4687]: I0314 09:02:42.041415 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:49 crc kubenswrapper[4687]: I0314 09:02:49.859614 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.086558 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.090760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.112017 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.189860 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.423138 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.821293 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 14 09:02:50 crc kubenswrapper[4687]: I0314 09:02:50.946787 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.143999 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.573593 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.744590 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.816205 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.944528 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.965879 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 14 09:02:51 crc kubenswrapper[4687]: I0314 09:02:51.981992 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.040721 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.085109 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.212324 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.223262 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.314968 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.590695 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.655297 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.682818 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.703961 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.724394 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.771241 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.805269 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.817514 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 14 09:02:52 crc kubenswrapper[4687]: I0314 09:02:52.852084 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.120230 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.129231 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.168870 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.242281 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.259280 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.366720 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.466225 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.515152 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.519612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.553075 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.590264 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.634114 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.907226 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.926500 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.968157 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 14 09:02:53 crc kubenswrapper[4687]: I0314 09:02:53.969458 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.040268 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.063580 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.074059 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.083893 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.098998 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.187592 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.246851 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.287062 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.432303 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.659368 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.755903 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.858975 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 14 09:02:54 crc kubenswrapper[4687]: I0314 09:02:54.862668 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.022830 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.028316 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.039802 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.102593 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.108662 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.113219 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.133714 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.197371 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.211744 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.261892 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.272711 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.273208 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.322016 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.416031 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.423499 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.479715 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.514721 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.537268 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.763845 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.815041 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 14 09:02:55 crc kubenswrapper[4687]: I0314 09:02:55.850091 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.036961 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.053976 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.083605 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.086886 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.108440 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.109571 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.141555 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.172055 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.318311 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.334483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.367184 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.414223 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.492591 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.514248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.520961 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.540859 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.563073 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.678807 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.733181 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.750947 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.793994 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.794504 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 14 09:02:56 crc kubenswrapper[4687]: I0314 09:02:56.930605 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.024414 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.199924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.246928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.344003 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.373139 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.473784 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.529847 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.535352 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.817848 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.864197 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.871271 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.894525 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.930019 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.958423 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 14 09:02:57 crc kubenswrapper[4687]: I0314 09:02:57.993450 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.019140 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.156691 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.225612 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.243415 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.343004 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.402465 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.406475 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.486724 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.777773 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.793928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.794886 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.800372 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 14 09:02:58 crc kubenswrapper[4687]: I0314 09:02:58.983801 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.015581 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.042607 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.086882 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.094286 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.221401 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.271420 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.279191 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.295894 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.780602 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.806484 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.846396 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.914417 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.921655 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 14 09:02:59 crc kubenswrapper[4687]: I0314 09:02:59.942387 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.027705 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.035605 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.051575 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.082810 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.232072 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.379848 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.671796 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.781663 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.788129 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.811473 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.837766 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.847518 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.899786 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 14 09:03:00 crc kubenswrapper[4687]: I0314 09:03:00.993229 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.010412 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.063406 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.205369 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.308000 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.336924 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.340828 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.457575 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.472541 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.508832 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.577672 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.586291 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.594920 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.605206 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.702113 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.760088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.810433 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.810908 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.841268 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.842735 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 14 09:03:01 crc kubenswrapper[4687]: I0314 09:03:01.996316 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.137282 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.280618 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.387836 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.395965 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.408630 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.426374 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.426483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.506429 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.509716 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.553255 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.558705 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.626511 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.650796 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.894477 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.972047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 14 09:03:02 crc kubenswrapper[4687]: I0314 09:03:02.972259 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.082325 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.359856 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.420702 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.505498 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.629449 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.636743 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.724827 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.859711 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 14 09:03:03 crc kubenswrapper[4687]: I0314 09:03:03.904213 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.069042 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.379405 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.554155 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.562670 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.581201 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.626195 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.629041 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.767017 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 14 09:03:04 crc kubenswrapper[4687]: I0314 09:03:04.782623 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 14 09:03:05 crc kubenswrapper[4687]: I0314 09:03:05.132511 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 14 09:03:05 crc kubenswrapper[4687]: I0314 09:03:05.179118 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 14 09:03:05 crc kubenswrapper[4687]: I0314 09:03:05.576022 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 14 09:03:14 crc kubenswrapper[4687]: I0314 09:03:14.232604 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 14 09:03:19 crc kubenswrapper[4687]: I0314 09:03:19.019949 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 14 09:03:21 crc kubenswrapper[4687]: I0314 09:03:21.526100 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 14 09:03:21 crc kubenswrapper[4687]: I0314 09:03:21.933380 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 14 09:03:21 crc kubenswrapper[4687]: I0314 09:03:21.956642 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 09:03:24 crc kubenswrapper[4687]: I0314 09:03:24.066579 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 14 09:03:25 crc kubenswrapper[4687]: I0314 09:03:25.034224 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 14 09:03:25 crc kubenswrapper[4687]: I0314 09:03:25.272840 4687 generic.go:334] "Generic (PLEG): container finished" podID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerID="c3a8a25485abc852dcbab8b6a0eea628b95b7b17401b499efeb63a28552f3367" exitCode=0
Mar 14 09:03:25 crc kubenswrapper[4687]: I0314 09:03:25.272891 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerDied","Data":"c3a8a25485abc852dcbab8b6a0eea628b95b7b17401b499efeb63a28552f3367"}
Mar 14 09:03:25 crc kubenswrapper[4687]: I0314 09:03:25.273609 4687 scope.go:117] "RemoveContainer" containerID="c3a8a25485abc852dcbab8b6a0eea628b95b7b17401b499efeb63a28552f3367"
Mar 14 09:03:26 crc kubenswrapper[4687]: I0314 09:03:26.118193 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 09:03:26 crc kubenswrapper[4687]: I0314 09:03:26.279296 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerStarted","Data":"2f4e0d3a7a2b0e3d2ef11d1e6180835a030f363dbfdd285f3822a3f693d6d617"} Mar 14 09:03:26 crc kubenswrapper[4687]: I0314 09:03:26.279855 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:03:26 crc kubenswrapper[4687]: I0314 09:03:26.281064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:03:28 crc kubenswrapper[4687]: I0314 09:03:28.836644 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 09:03:29 crc kubenswrapper[4687]: I0314 09:03:29.713787 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 09:03:31 crc kubenswrapper[4687]: I0314 09:03:31.312634 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 09:03:32 crc kubenswrapper[4687]: I0314 09:03:32.960134 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 09:03:35 crc kubenswrapper[4687]: I0314 09:03:35.232473 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.266213 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.271442 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sgxm6" podStartSLOduration=84.723397947 podStartE2EDuration="2m44.271420347s" podCreationTimestamp="2026-03-14 09:00:54 +0000 UTC" 
firstStartedPulling="2026-03-14 09:00:56.96555263 +0000 UTC m=+241.953793005" lastFinishedPulling="2026-03-14 09:02:16.51357503 +0000 UTC m=+321.501815405" observedRunningTime="2026-03-14 09:02:40.117589464 +0000 UTC m=+345.105829839" watchObservedRunningTime="2026-03-14 09:03:38.271420347 +0000 UTC m=+403.259660732" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.272730 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=82.272721989 podStartE2EDuration="1m22.272721989s" podCreationTimestamp="2026-03-14 09:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:40.052763798 +0000 UTC m=+345.041004173" watchObservedRunningTime="2026-03-14 09:03:38.272721989 +0000 UTC m=+403.260962384" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.273417 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" podStartSLOduration=85.273400525 podStartE2EDuration="1m25.273400525s" podCreationTimestamp="2026-03-14 09:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:40.041649018 +0000 UTC m=+345.029889393" watchObservedRunningTime="2026-03-14 09:03:38.273400525 +0000 UTC m=+403.261640910" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275030 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-khk5g","openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275117 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-66bf6f8d68-lxljr","openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg","openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:03:38 crc kubenswrapper[4687]: E0314 09:03:38.275586 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" containerName="oc" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275613 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" containerName="oc" Mar 14 09:03:38 crc kubenswrapper[4687]: E0314 09:03:38.275649 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerName="oauth-openshift" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275659 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerName="oauth-openshift" Mar 14 09:03:38 crc kubenswrapper[4687]: E0314 09:03:38.275679 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" containerName="installer" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275687 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" containerName="installer" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275945 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb28798-769f-4a7a-8da8-a5213458d060" containerName="installer" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275958 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" containerName="oc" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.275980 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" containerName="oauth-openshift" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.276712 
4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.276941 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="47ff72d2-1f06-49a9-a023-c792c80ad598" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.277466 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.279104 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.279253 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.279546 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" containerName="controller-manager" containerID="cri-o://7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550" gracePeriod=30 Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.279769 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.280026 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.283546 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.284104 4687 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.284212 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.284422 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.284547 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.284620 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.286310 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.286359 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.286492 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.286500 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.286648 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.290552 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.294619 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.297946 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.306103 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.335534 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.342384 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.342488 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=58.342464723 podStartE2EDuration="58.342464723s" podCreationTimestamp="2026-03-14 09:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:38.339022318 +0000 UTC m=+403.327262713" watchObservedRunningTime="2026-03-14 09:03:38.342464723 +0000 UTC m=+403.330705098" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-audit-policies\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365892 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.365914 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19013419-e035-4c87-9b56-6ecee521a3bd-audit-dir\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.366029 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.366130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-error\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.366157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-login\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.366195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-session\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.366229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6d6c\" (UniqueName: 
\"kubernetes.io/projected/19013419-e035-4c87-9b56-6ecee521a3bd-kube-api-access-j6d6c\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.454380 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.458925 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.459035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.461364 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.461645 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.461828 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.462319 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.462589 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.463406 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468232 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468266 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-audit-policies\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468354 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19013419-e035-4c87-9b56-6ecee521a3bd-audit-dir\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-error\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 
09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-session\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468502 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-login\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468527 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6d6c\" (UniqueName: \"kubernetes.io/projected/19013419-e035-4c87-9b56-6ecee521a3bd-kube-api-access-j6d6c\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468570 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.468595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.469183 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19013419-e035-4c87-9b56-6ecee521a3bd-audit-dir\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.469970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.469998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-audit-policies\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.470932 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.471850 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.474905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.475461 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-session\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.477443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.477685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.478810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.478824 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db6dccf-f86mg"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.483862 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.486393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-error\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.488690 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-user-template-login\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.490791 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j6d6c\" (UniqueName: \"kubernetes.io/projected/19013419-e035-4c87-9b56-6ecee521a3bd-kube-api-access-j6d6c\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.497248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19013419-e035-4c87-9b56-6ecee521a3bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66bf6f8d68-lxljr\" (UID: \"19013419-e035-4c87-9b56-6ecee521a3bd\") " pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.570031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.570319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcchg\" (UniqueName: \"kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.570395 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.570446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.608247 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.669967 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.671956 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.672043 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcchg\" (UniqueName: \"kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.672072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.672111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.673381 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.679254 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.680059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: 
I0314 09:03:38.696283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcchg\" (UniqueName: \"kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg\") pod \"route-controller-manager-7d54789467-cqnmr\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.773619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles\") pod \"541fbea6-1d05-4713-b587-f601c80e24b0\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.773706 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config\") pod \"541fbea6-1d05-4713-b587-f601c80e24b0\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.773724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert\") pod \"541fbea6-1d05-4713-b587-f601c80e24b0\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.773780 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2lb\" (UniqueName: \"kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb\") pod \"541fbea6-1d05-4713-b587-f601c80e24b0\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.773807 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca\") pod \"541fbea6-1d05-4713-b587-f601c80e24b0\" (UID: \"541fbea6-1d05-4713-b587-f601c80e24b0\") " Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.776033 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "541fbea6-1d05-4713-b587-f601c80e24b0" (UID: "541fbea6-1d05-4713-b587-f601c80e24b0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.776080 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "541fbea6-1d05-4713-b587-f601c80e24b0" (UID: "541fbea6-1d05-4713-b587-f601c80e24b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.776370 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config" (OuterVolumeSpecName: "config") pod "541fbea6-1d05-4713-b587-f601c80e24b0" (UID: "541fbea6-1d05-4713-b587-f601c80e24b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.778409 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb" (OuterVolumeSpecName: "kube-api-access-8x2lb") pod "541fbea6-1d05-4713-b587-f601c80e24b0" (UID: "541fbea6-1d05-4713-b587-f601c80e24b0"). InnerVolumeSpecName "kube-api-access-8x2lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.778648 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "541fbea6-1d05-4713-b587-f601c80e24b0" (UID: "541fbea6-1d05-4713-b587-f601c80e24b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.789744 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66bf6f8d68-lxljr"] Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.816689 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.875012 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.877201 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.906663 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541fbea6-1d05-4713-b587-f601c80e24b0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:38 crc kubenswrapper[4687]: I0314 09:03:38.906694 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2lb\" (UniqueName: \"kubernetes.io/projected/541fbea6-1d05-4713-b587-f601c80e24b0-kube-api-access-8x2lb\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:38 crc kubenswrapper[4687]: 
I0314 09:03:38.906705 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541fbea6-1d05-4713-b587-f601c80e24b0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.225308 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:03:39 crc kubenswrapper[4687]: W0314 09:03:39.226988 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4e7de_fa68_48a6_bac4_3320e494bc9b.slice/crio-cbedd906aaec679eec4e51329870b6a02eaf5a50ab314c53fffd6552d21fb55c WatchSource:0}: Error finding container cbedd906aaec679eec4e51329870b6a02eaf5a50ab314c53fffd6552d21fb55c: Status 404 returned error can't find the container with id cbedd906aaec679eec4e51329870b6a02eaf5a50ab314c53fffd6552d21fb55c Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.342449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" event={"ID":"18b4e7de-fa68-48a6-bac4-3320e494bc9b","Type":"ContainerStarted","Data":"cbedd906aaec679eec4e51329870b6a02eaf5a50ab314c53fffd6552d21fb55c"} Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.344345 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" event={"ID":"19013419-e035-4c87-9b56-6ecee521a3bd","Type":"ContainerStarted","Data":"f69798f99244bf3b455cb12245a3c392883d015809a3af8b63e7438dc1fd613a"} Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.344393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" event={"ID":"19013419-e035-4c87-9b56-6ecee521a3bd","Type":"ContainerStarted","Data":"3931f1b0ac898f979407381a28efc6484d52dd84be378cfd2d9bf430ae6669f2"} Mar 14 09:03:39 crc 
kubenswrapper[4687]: I0314 09:03:39.344598 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.347507 4687 generic.go:334] "Generic (PLEG): container finished" podID="541fbea6-1d05-4713-b587-f601c80e24b0" containerID="7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550" exitCode=0 Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.347586 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.347571 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" event={"ID":"541fbea6-1d05-4713-b587-f601c80e24b0","Type":"ContainerDied","Data":"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550"} Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.347723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f79c4f57-wcppw" event={"ID":"541fbea6-1d05-4713-b587-f601c80e24b0","Type":"ContainerDied","Data":"b8ca67a6c470f9252fb02c2d38501fefa4a61cce82a549a6882b5039595ca97e"} Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.347758 4687 scope.go:117] "RemoveContainer" containerID="7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.373617 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" podStartSLOduration=102.373599294 podStartE2EDuration="1m42.373599294s" podCreationTimestamp="2026-03-14 09:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:39.370695123 +0000 UTC 
m=+404.358935538" watchObservedRunningTime="2026-03-14 09:03:39.373599294 +0000 UTC m=+404.361839669" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.385506 4687 scope.go:117] "RemoveContainer" containerID="7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550" Mar 14 09:03:39 crc kubenswrapper[4687]: E0314 09:03:39.386139 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550\": container with ID starting with 7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550 not found: ID does not exist" containerID="7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.386232 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550"} err="failed to get container status \"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550\": rpc error: code = NotFound desc = could not find container \"7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550\": container with ID starting with 7e546c65d409e6fe8d1d1d884e2fc4d6f75d22ad17e27556bee11c5124c91550 not found: ID does not exist" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.391773 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"] Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.395937 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58f79c4f57-wcppw"] Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.543462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66bf6f8d68-lxljr" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.744116 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" path="/var/lib/kubelet/pods/541fbea6-1d05-4713-b587-f601c80e24b0/volumes" Mar 14 09:03:39 crc kubenswrapper[4687]: I0314 09:03:39.744685 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0c6355-63a3-4e53-b8f6-283e5ef456ed" path="/var/lib/kubelet/pods/5d0c6355-63a3-4e53-b8f6-283e5ef456ed/volumes" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.354107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" event={"ID":"18b4e7de-fa68-48a6-bac4-3320e494bc9b","Type":"ContainerStarted","Data":"582bd188c1ffc71be26e7b79bcd0fdbfb7ef4c1db775914536421556047be21a"} Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.354381 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.358622 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.371209 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" podStartSLOduration=7.371193402 podStartE2EDuration="7.371193402s" podCreationTimestamp="2026-03-14 09:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:40.370045324 +0000 UTC m=+405.358285699" watchObservedRunningTime="2026-03-14 09:03:40.371193402 +0000 UTC m=+405.359433777" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.920035 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:03:40 crc kubenswrapper[4687]: E0314 09:03:40.920465 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" containerName="controller-manager" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.920547 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" containerName="controller-manager" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.920716 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="541fbea6-1d05-4713-b587-f601c80e24b0" containerName="controller-manager" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.921131 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.924125 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.924195 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.924240 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.924351 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.924400 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.927301 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.935507 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:03:40 crc kubenswrapper[4687]: I0314 09:03:40.943990 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.036202 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.036326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnr24\" (UniqueName: \"kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.036480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.036674 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.036728 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.138312 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.138516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.138556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 
09:03:41.138611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.138644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnr24\" (UniqueName: \"kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.139570 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.140232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.140433 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " 
pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.147800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.164300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnr24\" (UniqueName: \"kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24\") pod \"controller-manager-5dc78d8bc9-hjr2j\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.245102 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.682374 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.717300 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:03:41 crc kubenswrapper[4687]: I0314 09:03:41.925355 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 09:03:42 crc kubenswrapper[4687]: I0314 09:03:42.370684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" event={"ID":"f32c6a34-90ab-4c7e-b2f1-e975daa294c9","Type":"ContainerStarted","Data":"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e"} Mar 14 09:03:42 crc kubenswrapper[4687]: I0314 09:03:42.370747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" event={"ID":"f32c6a34-90ab-4c7e-b2f1-e975daa294c9","Type":"ContainerStarted","Data":"9b091202bb46486e7c4a4f991fabd8e7a6029d1e0c01cd2a5c3ab8f05fbb60fe"} Mar 14 09:03:42 crc kubenswrapper[4687]: I0314 09:03:42.391709 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" podStartSLOduration=9.39168565 podStartE2EDuration="9.39168565s" podCreationTimestamp="2026-03-14 09:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:42.389799543 +0000 UTC m=+407.378039918" watchObservedRunningTime="2026-03-14 09:03:42.39168565 +0000 UTC m=+407.379926025" Mar 14 09:03:43 crc kubenswrapper[4687]: I0314 09:03:43.377927 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:43 crc kubenswrapper[4687]: I0314 09:03:43.383867 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:03:47 crc kubenswrapper[4687]: I0314 09:03:47.944090 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:47 crc kubenswrapper[4687]: I0314 09:03:47.944766 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d661aacbb1cf24ca7c2e150ee3ffa47c0e5d20907db4beaa5617d1466cb69ef7" gracePeriod=5 Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.433772 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.434297 4687 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d661aacbb1cf24ca7c2e150ee3ffa47c0e5d20907db4beaa5617d1466cb69ef7" exitCode=137 Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.535308 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.535431 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596649 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596791 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596823 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.596910 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.597113 4687 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.597129 4687 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.597141 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.597151 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.603834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.699692 4687 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.743211 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.743464 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.754316 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.754376 4687 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1b6e9ed5-ef6d-4c33-a9d9-cec373a1ff8c" Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.758390 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:53 crc kubenswrapper[4687]: I0314 09:03:53.758434 4687 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1b6e9ed5-ef6d-4c33-a9d9-cec373a1ff8c" Mar 14 09:03:54 crc kubenswrapper[4687]: I0314 09:03:54.441119 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 09:03:54 crc kubenswrapper[4687]: I0314 09:03:54.441555 4687 scope.go:117] "RemoveContainer" 
containerID="d661aacbb1cf24ca7c2e150ee3ffa47c0e5d20907db4beaa5617d1466cb69ef7" Mar 14 09:03:54 crc kubenswrapper[4687]: I0314 09:03:54.441677 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.133799 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557984-xqgvk"] Mar 14 09:04:00 crc kubenswrapper[4687]: E0314 09:04:00.134537 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.134554 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.134671 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.135095 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.137163 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.137576 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.137738 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.142153 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-xqgvk"] Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.174144 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdjj\" (UniqueName: \"kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj\") pod \"auto-csr-approver-29557984-xqgvk\" (UID: \"6ef16e0e-a8bf-4a55-823b-cecd4bd00831\") " pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.275364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdjj\" (UniqueName: \"kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj\") pod \"auto-csr-approver-29557984-xqgvk\" (UID: \"6ef16e0e-a8bf-4a55-823b-cecd4bd00831\") " pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.293807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdjj\" (UniqueName: \"kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj\") pod \"auto-csr-approver-29557984-xqgvk\" (UID: \"6ef16e0e-a8bf-4a55-823b-cecd4bd00831\") " 
pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.450947 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:00 crc kubenswrapper[4687]: I0314 09:04:00.858199 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-xqgvk"] Mar 14 09:04:01 crc kubenswrapper[4687]: I0314 09:04:01.489566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" event={"ID":"6ef16e0e-a8bf-4a55-823b-cecd4bd00831","Type":"ContainerStarted","Data":"510038e1bc15cecef82ff374bdc358c46016292ab8863000e4e430ac037de41f"} Mar 14 09:04:02 crc kubenswrapper[4687]: I0314 09:04:02.498272 4687 generic.go:334] "Generic (PLEG): container finished" podID="6ef16e0e-a8bf-4a55-823b-cecd4bd00831" containerID="67898b02166c9e69d6854a87aa266c2fca6e507f9730673d224f3f214d6c36a3" exitCode=0 Mar 14 09:04:02 crc kubenswrapper[4687]: I0314 09:04:02.498399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" event={"ID":"6ef16e0e-a8bf-4a55-823b-cecd4bd00831","Type":"ContainerDied","Data":"67898b02166c9e69d6854a87aa266c2fca6e507f9730673d224f3f214d6c36a3"} Mar 14 09:04:03 crc kubenswrapper[4687]: I0314 09:04:03.855177 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:03 crc kubenswrapper[4687]: I0314 09:04:03.925937 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdjj\" (UniqueName: \"kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj\") pod \"6ef16e0e-a8bf-4a55-823b-cecd4bd00831\" (UID: \"6ef16e0e-a8bf-4a55-823b-cecd4bd00831\") " Mar 14 09:04:03 crc kubenswrapper[4687]: I0314 09:04:03.930981 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj" (OuterVolumeSpecName: "kube-api-access-ltdjj") pod "6ef16e0e-a8bf-4a55-823b-cecd4bd00831" (UID: "6ef16e0e-a8bf-4a55-823b-cecd4bd00831"). InnerVolumeSpecName "kube-api-access-ltdjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:04 crc kubenswrapper[4687]: I0314 09:04:04.026935 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdjj\" (UniqueName: \"kubernetes.io/projected/6ef16e0e-a8bf-4a55-823b-cecd4bd00831-kube-api-access-ltdjj\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:04 crc kubenswrapper[4687]: I0314 09:04:04.510041 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" event={"ID":"6ef16e0e-a8bf-4a55-823b-cecd4bd00831","Type":"ContainerDied","Data":"510038e1bc15cecef82ff374bdc358c46016292ab8863000e4e430ac037de41f"} Mar 14 09:04:04 crc kubenswrapper[4687]: I0314 09:04:04.510357 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510038e1bc15cecef82ff374bdc358c46016292ab8863000e4e430ac037de41f" Mar 14 09:04:04 crc kubenswrapper[4687]: I0314 09:04:04.510106 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-xqgvk" Mar 14 09:04:09 crc kubenswrapper[4687]: I0314 09:04:09.976050 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:04:09 crc kubenswrapper[4687]: I0314 09:04:09.976879 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8f9h8" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="registry-server" containerID="cri-o://c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29" gracePeriod=2 Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.175319 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.175623 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chn6l" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="registry-server" containerID="cri-o://371e5a3d2d04faccd040f043b3bf6716feb4d5ceacc35504ee162404861ad529" gracePeriod=2 Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.454482 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.505840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xvd5\" (UniqueName: \"kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5\") pod \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.505976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities\") pod \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.507176 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities" (OuterVolumeSpecName: "utilities") pod "6f9f2a8a-59b8-4803-976f-d23c1d6de630" (UID: "6f9f2a8a-59b8-4803-976f-d23c1d6de630"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.507290 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content\") pod \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\" (UID: \"6f9f2a8a-59b8-4803-976f-d23c1d6de630\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.508917 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.511879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5" (OuterVolumeSpecName: "kube-api-access-9xvd5") pod "6f9f2a8a-59b8-4803-976f-d23c1d6de630" (UID: "6f9f2a8a-59b8-4803-976f-d23c1d6de630"). InnerVolumeSpecName "kube-api-access-9xvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.550033 4687 generic.go:334] "Generic (PLEG): container finished" podID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerID="c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29" exitCode=0 Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.550196 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f9h8" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.550683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerDied","Data":"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29"} Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.550732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f9h8" event={"ID":"6f9f2a8a-59b8-4803-976f-d23c1d6de630","Type":"ContainerDied","Data":"58350fed0d7a8dc8b9a35a9a8f218cf411ebc27db43bcd393a52b97a7c63debe"} Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.550757 4687 scope.go:117] "RemoveContainer" containerID="c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.553556 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerID="371e5a3d2d04faccd040f043b3bf6716feb4d5ceacc35504ee162404861ad529" exitCode=0 Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.553623 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerDied","Data":"371e5a3d2d04faccd040f043b3bf6716feb4d5ceacc35504ee162404861ad529"} Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.557747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f9f2a8a-59b8-4803-976f-d23c1d6de630" (UID: "6f9f2a8a-59b8-4803-976f-d23c1d6de630"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.568590 4687 scope.go:117] "RemoveContainer" containerID="6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.586528 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.589389 4687 scope.go:117] "RemoveContainer" containerID="3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.602293 4687 scope.go:117] "RemoveContainer" containerID="c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29" Mar 14 09:04:10 crc kubenswrapper[4687]: E0314 09:04:10.602850 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29\": container with ID starting with c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29 not found: ID does not exist" containerID="c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.602889 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29"} err="failed to get container status \"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29\": rpc error: code = NotFound desc = could not find container \"c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29\": container with ID starting with c0c5d9a39397f7e6f53b44f3eddc1c69dd47fe90896f2e0e220af0a05794db29 not found: ID does not exist" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.602914 4687 scope.go:117] "RemoveContainer" 
containerID="6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065" Mar 14 09:04:10 crc kubenswrapper[4687]: E0314 09:04:10.603163 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065\": container with ID starting with 6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065 not found: ID does not exist" containerID="6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.603199 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065"} err="failed to get container status \"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065\": rpc error: code = NotFound desc = could not find container \"6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065\": container with ID starting with 6f65be6c0cccf656cf53bb01035d47e7cf29c091b6e4c47ebdd91d94a775b065 not found: ID does not exist" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.603219 4687 scope.go:117] "RemoveContainer" containerID="3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a" Mar 14 09:04:10 crc kubenswrapper[4687]: E0314 09:04:10.603520 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a\": container with ID starting with 3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a not found: ID does not exist" containerID="3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.603581 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a"} err="failed to get container status \"3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a\": rpc error: code = NotFound desc = could not find container \"3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a\": container with ID starting with 3e6d6a817421eb1d049d18c0fb932a974749c37ec3e80eafff3a00108c0f024a not found: ID does not exist" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.610472 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities\") pod \"b6b25e7f-bec3-4142-a347-886777f6a1c2\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.610563 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tngrz\" (UniqueName: \"kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz\") pod \"b6b25e7f-bec3-4142-a347-886777f6a1c2\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.610708 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content\") pod \"b6b25e7f-bec3-4142-a347-886777f6a1c2\" (UID: \"b6b25e7f-bec3-4142-a347-886777f6a1c2\") " Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.611035 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9f2a8a-59b8-4803-976f-d23c1d6de630-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.611059 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xvd5\" (UniqueName: 
\"kubernetes.io/projected/6f9f2a8a-59b8-4803-976f-d23c1d6de630-kube-api-access-9xvd5\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.611852 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities" (OuterVolumeSpecName: "utilities") pod "b6b25e7f-bec3-4142-a347-886777f6a1c2" (UID: "b6b25e7f-bec3-4142-a347-886777f6a1c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.614408 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz" (OuterVolumeSpecName: "kube-api-access-tngrz") pod "b6b25e7f-bec3-4142-a347-886777f6a1c2" (UID: "b6b25e7f-bec3-4142-a347-886777f6a1c2"). InnerVolumeSpecName "kube-api-access-tngrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.662014 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b25e7f-bec3-4142-a347-886777f6a1c2" (UID: "b6b25e7f-bec3-4142-a347-886777f6a1c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.712324 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.712414 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tngrz\" (UniqueName: \"kubernetes.io/projected/b6b25e7f-bec3-4142-a347-886777f6a1c2-kube-api-access-tngrz\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.712425 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b25e7f-bec3-4142-a347-886777f6a1c2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.877543 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:04:10 crc kubenswrapper[4687]: I0314 09:04:10.893950 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8f9h8"] Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.563155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chn6l" event={"ID":"b6b25e7f-bec3-4142-a347-886777f6a1c2","Type":"ContainerDied","Data":"de790bce55ba676db93e85eb0f757ab724b40ab30b6faf19b579f156103f20e1"} Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.563243 4687 scope.go:117] "RemoveContainer" containerID="371e5a3d2d04faccd040f043b3bf6716feb4d5ceacc35504ee162404861ad529" Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.564546 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chn6l" Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.592302 4687 scope.go:117] "RemoveContainer" containerID="5554fbf72d5640ab333841bf0c1ef3c08b9fcdf16cb2ea40b08e8dc0c2e10d93" Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.599649 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.605162 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chn6l"] Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.630312 4687 scope.go:117] "RemoveContainer" containerID="735dd163fdb008fbc19c5a7d867182a1331afe790b256f2be93f299ffa0f5432" Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.744428 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" path="/var/lib/kubelet/pods/6f9f2a8a-59b8-4803-976f-d23c1d6de630/volumes" Mar 14 09:04:11 crc kubenswrapper[4687]: I0314 09:04:11.745153 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" path="/var/lib/kubelet/pods/b6b25e7f-bec3-4142-a347-886777f6a1c2/volumes" Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.379283 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.379580 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p62jj" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="registry-server" containerID="cri-o://e8dde221a5869c5f7d4dc028c02b9a58db25b746c4091a299151105017abba5f" gracePeriod=2 Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.572384 4687 generic.go:334] "Generic (PLEG): container finished" podID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" 
containerID="e8dde221a5869c5f7d4dc028c02b9a58db25b746c4091a299151105017abba5f" exitCode=0 Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.572450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerDied","Data":"e8dde221a5869c5f7d4dc028c02b9a58db25b746c4091a299151105017abba5f"} Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.859393 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.939230 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities\") pod \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.939299 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content\") pod \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.939442 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5pbr\" (UniqueName: \"kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr\") pod \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\" (UID: \"d3d7d663-f8a3-477b-9487-3e284e3cdf6b\") " Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.940060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities" (OuterVolumeSpecName: "utilities") pod "d3d7d663-f8a3-477b-9487-3e284e3cdf6b" (UID: 
"d3d7d663-f8a3-477b-9487-3e284e3cdf6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.954488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr" (OuterVolumeSpecName: "kube-api-access-c5pbr") pod "d3d7d663-f8a3-477b-9487-3e284e3cdf6b" (UID: "d3d7d663-f8a3-477b-9487-3e284e3cdf6b"). InnerVolumeSpecName "kube-api-access-c5pbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:12 crc kubenswrapper[4687]: I0314 09:04:12.976655 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d7d663-f8a3-477b-9487-3e284e3cdf6b" (UID: "d3d7d663-f8a3-477b-9487-3e284e3cdf6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.040467 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.040526 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5pbr\" (UniqueName: \"kubernetes.io/projected/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-kube-api-access-c5pbr\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.040541 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d7d663-f8a3-477b-9487-3e284e3cdf6b-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.284178 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.284444 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" podUID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" containerName="route-controller-manager" containerID="cri-o://582bd188c1ffc71be26e7b79bcd0fdbfb7ef4c1db775914536421556047be21a" gracePeriod=30 Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.582928 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62jj" event={"ID":"d3d7d663-f8a3-477b-9487-3e284e3cdf6b","Type":"ContainerDied","Data":"54e839a3df74d162c4090492bc16e97b6a8c8c63c0545f9e4ae822de4a50b9c4"} Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.582977 4687 scope.go:117] "RemoveContainer" containerID="e8dde221a5869c5f7d4dc028c02b9a58db25b746c4091a299151105017abba5f" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.583064 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62jj" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.588954 4687 generic.go:334] "Generic (PLEG): container finished" podID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" containerID="582bd188c1ffc71be26e7b79bcd0fdbfb7ef4c1db775914536421556047be21a" exitCode=0 Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.588999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" event={"ID":"18b4e7de-fa68-48a6-bac4-3320e494bc9b","Type":"ContainerDied","Data":"582bd188c1ffc71be26e7b79bcd0fdbfb7ef4c1db775914536421556047be21a"} Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.609285 4687 scope.go:117] "RemoveContainer" containerID="f32b87d3a5c7aba54844f1e513bf19d02331e97936066a810b20455665755ff9" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.623013 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.629374 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62jj"] Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.645622 4687 scope.go:117] "RemoveContainer" containerID="afa4426fbd1e9694c2dae251b3aa68a30546ad7f689fd2d2a0c575fed16dd4aa" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.744221 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" path="/var/lib/kubelet/pods/d3d7d663-f8a3-477b-9487-3e284e3cdf6b/volumes" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.749839 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.849498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcchg\" (UniqueName: \"kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg\") pod \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.849808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert\") pod \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.849896 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config\") pod \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.850037 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca\") pod \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\" (UID: \"18b4e7de-fa68-48a6-bac4-3320e494bc9b\") " Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.850452 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config" (OuterVolumeSpecName: "config") pod "18b4e7de-fa68-48a6-bac4-3320e494bc9b" (UID: "18b4e7de-fa68-48a6-bac4-3320e494bc9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.850491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca" (OuterVolumeSpecName: "client-ca") pod "18b4e7de-fa68-48a6-bac4-3320e494bc9b" (UID: "18b4e7de-fa68-48a6-bac4-3320e494bc9b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.856541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18b4e7de-fa68-48a6-bac4-3320e494bc9b" (UID: "18b4e7de-fa68-48a6-bac4-3320e494bc9b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.856588 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg" (OuterVolumeSpecName: "kube-api-access-pcchg") pod "18b4e7de-fa68-48a6-bac4-3320e494bc9b" (UID: "18b4e7de-fa68-48a6-bac4-3320e494bc9b"). InnerVolumeSpecName "kube-api-access-pcchg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.952162 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.952205 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcchg\" (UniqueName: \"kubernetes.io/projected/18b4e7de-fa68-48a6-bac4-3320e494bc9b-kube-api-access-pcchg\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.952221 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e7de-fa68-48a6-bac4-3320e494bc9b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:13 crc kubenswrapper[4687]: I0314 09:04:13.952232 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e7de-fa68-48a6-bac4-3320e494bc9b-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.596298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" event={"ID":"18b4e7de-fa68-48a6-bac4-3320e494bc9b","Type":"ContainerDied","Data":"cbedd906aaec679eec4e51329870b6a02eaf5a50ab314c53fffd6552d21fb55c"} Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.596372 4687 scope.go:117] "RemoveContainer" containerID="582bd188c1ffc71be26e7b79bcd0fdbfb7ef4c1db775914536421556047be21a" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.596384 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.623462 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.632308 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d54789467-cqnmr"] Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948109 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5"] Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948355 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948370 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948384 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948391 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948405 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948413 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948425 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948433 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948444 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948451 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948460 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948468 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948498 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948507 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948520 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef16e0e-a8bf-4a55-823b-cecd4bd00831" containerName="oc" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948528 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef16e0e-a8bf-4a55-823b-cecd4bd00831" containerName="oc" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948537 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948544 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="extract-utilities" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948556 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948564 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="extract-content" Mar 14 09:04:14 crc kubenswrapper[4687]: E0314 09:04:14.948573 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948581 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948683 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b25e7f-bec3-4142-a347-886777f6a1c2" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948696 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d7d663-f8a3-477b-9487-3e284e3cdf6b" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948706 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948718 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef16e0e-a8bf-4a55-823b-cecd4bd00831" containerName="oc" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.948727 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6f9f2a8a-59b8-4803-976f-d23c1d6de630" containerName="registry-server" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.949182 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.951133 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.951395 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.951592 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.951793 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.951888 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.954289 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:04:14 crc kubenswrapper[4687]: I0314 09:04:14.990592 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5"] Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.063903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-config\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: 
\"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.064026 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064dd319-efea-448e-b4f3-c5efa1c15eba-serving-cert\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.064092 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8hrx\" (UniqueName: \"kubernetes.io/projected/064dd319-efea-448e-b4f3-c5efa1c15eba-kube-api-access-r8hrx\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.064126 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-client-ca\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.165472 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064dd319-efea-448e-b4f3-c5efa1c15eba-serving-cert\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.165530 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8hrx\" (UniqueName: \"kubernetes.io/projected/064dd319-efea-448e-b4f3-c5efa1c15eba-kube-api-access-r8hrx\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.165563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-client-ca\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.165634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-config\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.166711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-client-ca\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.166947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064dd319-efea-448e-b4f3-c5efa1c15eba-config\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " 
pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.171100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064dd319-efea-448e-b4f3-c5efa1c15eba-serving-cert\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.185715 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8hrx\" (UniqueName: \"kubernetes.io/projected/064dd319-efea-448e-b4f3-c5efa1c15eba-kube-api-access-r8hrx\") pod \"route-controller-manager-9c7484f5-582m5\" (UID: \"064dd319-efea-448e-b4f3-c5efa1c15eba\") " pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.262647 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.638644 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5"] Mar 14 09:04:15 crc kubenswrapper[4687]: I0314 09:04:15.744616 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b4e7de-fa68-48a6-bac4-3320e494bc9b" path="/var/lib/kubelet/pods/18b4e7de-fa68-48a6-bac4-3320e494bc9b/volumes" Mar 14 09:04:16 crc kubenswrapper[4687]: I0314 09:04:16.609688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" event={"ID":"064dd319-efea-448e-b4f3-c5efa1c15eba","Type":"ContainerStarted","Data":"a57bd99deac84fb456f3d1d2a468888126addba8c93662fdf9265a5489e01d51"} Mar 14 09:04:16 crc kubenswrapper[4687]: I0314 09:04:16.609730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" event={"ID":"064dd319-efea-448e-b4f3-c5efa1c15eba","Type":"ContainerStarted","Data":"99c3d9b85f536701ae7f82af487ceb95010596fd9db48b41b7c8b69a47beb907"} Mar 14 09:04:16 crc kubenswrapper[4687]: I0314 09:04:16.609966 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:16 crc kubenswrapper[4687]: I0314 09:04:16.614920 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" Mar 14 09:04:16 crc kubenswrapper[4687]: I0314 09:04:16.629061 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9c7484f5-582m5" podStartSLOduration=3.629044313 podStartE2EDuration="3.629044313s" 
podCreationTimestamp="2026-03-14 09:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:16.627825534 +0000 UTC m=+441.616065919" watchObservedRunningTime="2026-03-14 09:04:16.629044313 +0000 UTC m=+441.617284688" Mar 14 09:04:24 crc kubenswrapper[4687]: I0314 09:04:24.111827 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:04:24 crc kubenswrapper[4687]: I0314 09:04:24.112985 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.618610 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qh848"] Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.619764 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.630778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qh848"] Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751175 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-bound-sa-token\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trk4d\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-kube-api-access-trk4d\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751537 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751679 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-tls\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.751886 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-certificates\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.752040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-trusted-ca\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.752089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.772905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-tls\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-certificates\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-trusted-ca\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: 
I0314 09:04:29.853298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-bound-sa-token\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853366 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trk4d\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-kube-api-access-trk4d\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.853659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.854816 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-certificates\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.854869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-trusted-ca\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.858847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.892290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-registry-tls\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.909233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-bound-sa-token\") pod \"image-registry-66df7c8f76-qh848\" (UID: \"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.914250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trk4d\" (UniqueName: \"kubernetes.io/projected/cf12388e-6ffa-4119-b1e1-7d0abb56ae57-kube-api-access-trk4d\") pod \"image-registry-66df7c8f76-qh848\" (UID: 
\"cf12388e-6ffa-4119-b1e1-7d0abb56ae57\") " pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:29 crc kubenswrapper[4687]: I0314 09:04:29.937116 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:30 crc kubenswrapper[4687]: I0314 09:04:30.362654 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qh848"] Mar 14 09:04:30 crc kubenswrapper[4687]: I0314 09:04:30.681899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" event={"ID":"cf12388e-6ffa-4119-b1e1-7d0abb56ae57","Type":"ContainerStarted","Data":"0f28640cf56d260aa33bb91d0423b247c411da6e304a64a5e4566841d8ed0f4c"} Mar 14 09:04:30 crc kubenswrapper[4687]: I0314 09:04:30.681946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" event={"ID":"cf12388e-6ffa-4119-b1e1-7d0abb56ae57","Type":"ContainerStarted","Data":"7f76bf86e23ac1a1dbe3a6d7a1906e19080cb8ed2029d1f9d3ff54dc680cb81c"} Mar 14 09:04:30 crc kubenswrapper[4687]: I0314 09:04:30.682038 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.242596 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" podStartSLOduration=4.242575884 podStartE2EDuration="4.242575884s" podCreationTimestamp="2026-03-14 09:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:30.703206111 +0000 UTC m=+455.691446486" watchObservedRunningTime="2026-03-14 09:04:33.242575884 +0000 UTC m=+458.230816259" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 
09:04:33.245100 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.245512 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" podUID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" containerName="controller-manager" containerID="cri-o://d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e" gracePeriod=30 Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.593750 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.698389 4687 generic.go:334] "Generic (PLEG): container finished" podID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" containerID="d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e" exitCode=0 Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.698439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" event={"ID":"f32c6a34-90ab-4c7e-b2f1-e975daa294c9","Type":"ContainerDied","Data":"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e"} Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.698460 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.698513 4687 scope.go:117] "RemoveContainer" containerID="d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.698500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j" event={"ID":"f32c6a34-90ab-4c7e-b2f1-e975daa294c9","Type":"ContainerDied","Data":"9b091202bb46486e7c4a4f991fabd8e7a6029d1e0c01cd2a5c3ab8f05fbb60fe"} Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.702951 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca\") pod \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config\") pod \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703052 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnr24\" (UniqueName: \"kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24\") pod \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles\") pod \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\" (UID: 
\"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703118 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert\") pod \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\" (UID: \"f32c6a34-90ab-4c7e-b2f1-e975daa294c9\") " Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703735 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f32c6a34-90ab-4c7e-b2f1-e975daa294c9" (UID: "f32c6a34-90ab-4c7e-b2f1-e975daa294c9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "f32c6a34-90ab-4c7e-b2f1-e975daa294c9" (UID: "f32c6a34-90ab-4c7e-b2f1-e975daa294c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.703820 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config" (OuterVolumeSpecName: "config") pod "f32c6a34-90ab-4c7e-b2f1-e975daa294c9" (UID: "f32c6a34-90ab-4c7e-b2f1-e975daa294c9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.707568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f32c6a34-90ab-4c7e-b2f1-e975daa294c9" (UID: "f32c6a34-90ab-4c7e-b2f1-e975daa294c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.707762 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24" (OuterVolumeSpecName: "kube-api-access-qnr24") pod "f32c6a34-90ab-4c7e-b2f1-e975daa294c9" (UID: "f32c6a34-90ab-4c7e-b2f1-e975daa294c9"). InnerVolumeSpecName "kube-api-access-qnr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.712999 4687 scope.go:117] "RemoveContainer" containerID="d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e" Mar 14 09:04:33 crc kubenswrapper[4687]: E0314 09:04:33.713368 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e\": container with ID starting with d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e not found: ID does not exist" containerID="d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.713429 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e"} err="failed to get container status \"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e\": rpc error: code = NotFound desc = could not find container 
\"d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e\": container with ID starting with d612eebc776998ae5597ab63a290d6e81842838c3208f92e4c0de2f8c240d77e not found: ID does not exist" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.804372 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.804406 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.804416 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnr24\" (UniqueName: \"kubernetes.io/projected/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-kube-api-access-qnr24\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.804428 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:33 crc kubenswrapper[4687]: I0314 09:04:33.804450 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32c6a34-90ab-4c7e-b2f1-e975daa294c9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.016927 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.020139 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dc78d8bc9-hjr2j"] Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.962865 4687 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-controller-manager/controller-manager-7dfd478b97-vljht"] Mar 14 09:04:34 crc kubenswrapper[4687]: E0314 09:04:34.964155 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" containerName="controller-manager" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.964274 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" containerName="controller-manager" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.964499 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" containerName="controller-manager" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.964998 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.966866 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.967051 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.967309 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.968454 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.968533 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.968631 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.979627 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfd478b97-vljht"] Mar 14 09:04:34 crc kubenswrapper[4687]: I0314 09:04:34.992508 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.036941 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70db39ea-759d-4b16-9b1a-fb73845c6a69-serving-cert\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.037025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-proxy-ca-bundles\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.037206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mrd\" (UniqueName: \"kubernetes.io/projected/70db39ea-759d-4b16-9b1a-fb73845c6a69-kube-api-access-q4mrd\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.037265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-config\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.037312 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-client-ca\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.138783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-proxy-ca-bundles\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.138832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mrd\" (UniqueName: \"kubernetes.io/projected/70db39ea-759d-4b16-9b1a-fb73845c6a69-kube-api-access-q4mrd\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.138856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-config\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.138875 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-client-ca\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.138909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70db39ea-759d-4b16-9b1a-fb73845c6a69-serving-cert\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.140594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-client-ca\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.140748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-config\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.140783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70db39ea-759d-4b16-9b1a-fb73845c6a69-proxy-ca-bundles\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 
crc kubenswrapper[4687]: I0314 09:04:35.145528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70db39ea-759d-4b16-9b1a-fb73845c6a69-serving-cert\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.154035 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mrd\" (UniqueName: \"kubernetes.io/projected/70db39ea-759d-4b16-9b1a-fb73845c6a69-kube-api-access-q4mrd\") pod \"controller-manager-7dfd478b97-vljht\" (UID: \"70db39ea-759d-4b16-9b1a-fb73845c6a69\") " pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.293977 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.500661 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfd478b97-vljht"] Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.715496 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" event={"ID":"70db39ea-759d-4b16-9b1a-fb73845c6a69","Type":"ContainerStarted","Data":"5b9ee3f64ec7eb7f21111d09a27154b7f3f68e8b0ddb633101ed6ae995f29dd3"} Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.715584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" event={"ID":"70db39ea-759d-4b16-9b1a-fb73845c6a69","Type":"ContainerStarted","Data":"e757d790f759f74705a09e0e9bea630c7e771f4818bb3dea3f52e5c88c541c8d"} Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.716004 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.721575 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.733228 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7dfd478b97-vljht" podStartSLOduration=2.733210038 podStartE2EDuration="2.733210038s" podCreationTimestamp="2026-03-14 09:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:35.729581948 +0000 UTC m=+460.717822323" watchObservedRunningTime="2026-03-14 09:04:35.733210038 +0000 UTC m=+460.721450413" Mar 14 09:04:35 crc kubenswrapper[4687]: I0314 09:04:35.742996 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32c6a34-90ab-4c7e-b2f1-e975daa294c9" path="/var/lib/kubelet/pods/f32c6a34-90ab-4c7e-b2f1-e975daa294c9/volumes" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.279461 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.280187 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sgxm6" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="registry-server" containerID="cri-o://c1b72f08b1ee5d7f7ed6bd748141dd20cb33168b8e40905b999160e2083fd4a2" gracePeriod=30 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.296540 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.296880 4687 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/community-operators-j8w67" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="registry-server" containerID="cri-o://f235b8efd57ed0ab26d8877ee0af72d8f40734d85d0f17fd129b36e9b7856222" gracePeriod=30 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.300040 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.300280 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" containerID="cri-o://2f4e0d3a7a2b0e3d2ef11d1e6180835a030f363dbfdd285f3822a3f693d6d617" gracePeriod=30 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.305098 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.305324 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x58t9" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="registry-server" containerID="cri-o://fd9f6f300484e3d631bbfe2f40013a8d55fd5a407f2dbf170b6d0a09ae43002a" gracePeriod=30 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.317544 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.317863 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qp9c4" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="registry-server" containerID="cri-o://d561ee793aff1f750be1237a59ae4cb40b9bae9cc2188082ffb128f88bd17b13" gracePeriod=30 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.325164 4687 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpppm"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.325801 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.333787 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpppm"] Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.430533 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.430587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.430619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-kube-api-access-bkdbz\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.532474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.532528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.532557 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-kube-api-access-bkdbz\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.534567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.543511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.550359 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f-kube-api-access-bkdbz\") pod \"marketplace-operator-79b997595-lpppm\" (UID: \"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.768593 4687 generic.go:334] "Generic (PLEG): container finished" podID="764f93b4-9c3b-400d-b508-4534689e51a7" containerID="fd9f6f300484e3d631bbfe2f40013a8d55fd5a407f2dbf170b6d0a09ae43002a" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.768709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerDied","Data":"fd9f6f300484e3d631bbfe2f40013a8d55fd5a407f2dbf170b6d0a09ae43002a"} Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.775770 4687 generic.go:334] "Generic (PLEG): container finished" podID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerID="2f4e0d3a7a2b0e3d2ef11d1e6180835a030f363dbfdd285f3822a3f693d6d617" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.775825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerDied","Data":"2f4e0d3a7a2b0e3d2ef11d1e6180835a030f363dbfdd285f3822a3f693d6d617"} Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.775904 4687 scope.go:117] "RemoveContainer" containerID="c3a8a25485abc852dcbab8b6a0eea628b95b7b17401b499efeb63a28552f3367" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.779977 4687 generic.go:334] "Generic (PLEG): container finished" podID="89f679b4-c725-4c83-9248-e1a292d851bf" 
containerID="d561ee793aff1f750be1237a59ae4cb40b9bae9cc2188082ffb128f88bd17b13" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.780043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerDied","Data":"d561ee793aff1f750be1237a59ae4cb40b9bae9cc2188082ffb128f88bd17b13"} Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.783967 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8b47421-912e-4faa-b3ed-33881459d76e" containerID="c1b72f08b1ee5d7f7ed6bd748141dd20cb33168b8e40905b999160e2083fd4a2" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.784007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerDied","Data":"c1b72f08b1ee5d7f7ed6bd748141dd20cb33168b8e40905b999160e2083fd4a2"} Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.786010 4687 generic.go:334] "Generic (PLEG): container finished" podID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerID="f235b8efd57ed0ab26d8877ee0af72d8f40734d85d0f17fd129b36e9b7856222" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.786035 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerDied","Data":"f235b8efd57ed0ab26d8877ee0af72d8f40734d85d0f17fd129b36e9b7856222"} Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.850119 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.851938 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.941726 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:04:42 crc kubenswrapper[4687]: I0314 09:04:42.973680 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.046063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content\") pod \"89f679b4-c725-4c83-9248-e1a292d851bf\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.046133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content\") pod \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.046188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities\") pod \"89f679b4-c725-4c83-9248-e1a292d851bf\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.048661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk9p4\" (UniqueName: \"kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4\") pod \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.048739 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities\") pod \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\" (UID: \"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.048825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8bw9\" (UniqueName: \"kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9\") pod \"89f679b4-c725-4c83-9248-e1a292d851bf\" (UID: \"89f679b4-c725-4c83-9248-e1a292d851bf\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.049385 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities" (OuterVolumeSpecName: "utilities") pod "89f679b4-c725-4c83-9248-e1a292d851bf" (UID: "89f679b4-c725-4c83-9248-e1a292d851bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.049762 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.059493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities" (OuterVolumeSpecName: "utilities") pod "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" (UID: "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.067112 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9" (OuterVolumeSpecName: "kube-api-access-j8bw9") pod "89f679b4-c725-4c83-9248-e1a292d851bf" (UID: "89f679b4-c725-4c83-9248-e1a292d851bf"). InnerVolumeSpecName "kube-api-access-j8bw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.070526 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4" (OuterVolumeSpecName: "kube-api-access-bk9p4") pod "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" (UID: "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f"). InnerVolumeSpecName "kube-api-access-bk9p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.076382 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.118573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" (UID: "1cbd8ddb-1c88-4838-bce4-982b8c78ab4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.150424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvscv\" (UniqueName: \"kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv\") pod \"696fbdef-0b69-41a2-bb11-df22a4f753af\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.150476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics\") pod \"696fbdef-0b69-41a2-bb11-df22a4f753af\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.150891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca\") pod \"696fbdef-0b69-41a2-bb11-df22a4f753af\" (UID: \"696fbdef-0b69-41a2-bb11-df22a4f753af\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.151186 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.151205 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk9p4\" (UniqueName: \"kubernetes.io/projected/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-kube-api-access-bk9p4\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.151219 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 
crc kubenswrapper[4687]: I0314 09:04:43.151231 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8bw9\" (UniqueName: \"kubernetes.io/projected/89f679b4-c725-4c83-9248-e1a292d851bf-kube-api-access-j8bw9\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.151832 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "696fbdef-0b69-41a2-bb11-df22a4f753af" (UID: "696fbdef-0b69-41a2-bb11-df22a4f753af"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.153785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv" (OuterVolumeSpecName: "kube-api-access-gvscv") pod "696fbdef-0b69-41a2-bb11-df22a4f753af" (UID: "696fbdef-0b69-41a2-bb11-df22a4f753af"). InnerVolumeSpecName "kube-api-access-gvscv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.155568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "696fbdef-0b69-41a2-bb11-df22a4f753af" (UID: "696fbdef-0b69-41a2-bb11-df22a4f753af"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.199618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89f679b4-c725-4c83-9248-e1a292d851bf" (UID: "89f679b4-c725-4c83-9248-e1a292d851bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content\") pod \"764f93b4-9c3b-400d-b508-4534689e51a7\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7wm9\" (UniqueName: \"kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9\") pod \"764f93b4-9c3b-400d-b508-4534689e51a7\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities\") pod \"764f93b4-9c3b-400d-b508-4534689e51a7\" (UID: \"764f93b4-9c3b-400d-b508-4534689e51a7\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252726 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvscv\" (UniqueName: \"kubernetes.io/projected/696fbdef-0b69-41a2-bb11-df22a4f753af-kube-api-access-gvscv\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252744 4687 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252758 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89f679b4-c725-4c83-9248-e1a292d851bf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.252769 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/696fbdef-0b69-41a2-bb11-df22a4f753af-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.253540 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities" (OuterVolumeSpecName: "utilities") pod "764f93b4-9c3b-400d-b508-4534689e51a7" (UID: "764f93b4-9c3b-400d-b508-4534689e51a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.257343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9" (OuterVolumeSpecName: "kube-api-access-g7wm9") pod "764f93b4-9c3b-400d-b508-4534689e51a7" (UID: "764f93b4-9c3b-400d-b508-4534689e51a7"). InnerVolumeSpecName "kube-api-access-g7wm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.277153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "764f93b4-9c3b-400d-b508-4534689e51a7" (UID: "764f93b4-9c3b-400d-b508-4534689e51a7"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.314153 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.354813 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.354857 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7wm9\" (UniqueName: \"kubernetes.io/projected/764f93b4-9c3b-400d-b508-4534689e51a7-kube-api-access-g7wm9\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.354873 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764f93b4-9c3b-400d-b508-4534689e51a7-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.418482 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpppm"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.455305 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content\") pod \"a8b47421-912e-4faa-b3ed-33881459d76e\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.455388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities\") pod \"a8b47421-912e-4faa-b3ed-33881459d76e\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " Mar 14 
09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.455491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9nqj\" (UniqueName: \"kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj\") pod \"a8b47421-912e-4faa-b3ed-33881459d76e\" (UID: \"a8b47421-912e-4faa-b3ed-33881459d76e\") " Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.456371 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities" (OuterVolumeSpecName: "utilities") pod "a8b47421-912e-4faa-b3ed-33881459d76e" (UID: "a8b47421-912e-4faa-b3ed-33881459d76e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.458224 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj" (OuterVolumeSpecName: "kube-api-access-b9nqj") pod "a8b47421-912e-4faa-b3ed-33881459d76e" (UID: "a8b47421-912e-4faa-b3ed-33881459d76e"). InnerVolumeSpecName "kube-api-access-b9nqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.503235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8b47421-912e-4faa-b3ed-33881459d76e" (UID: "a8b47421-912e-4faa-b3ed-33881459d76e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.557257 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9nqj\" (UniqueName: \"kubernetes.io/projected/a8b47421-912e-4faa-b3ed-33881459d76e-kube-api-access-b9nqj\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.557296 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.557307 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b47421-912e-4faa-b3ed-33881459d76e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.792842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" event={"ID":"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f","Type":"ContainerStarted","Data":"947b597e142902abf7435be3d6e247c84448d256f297a035f24f1087b079d4fa"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.792895 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" event={"ID":"70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f","Type":"ContainerStarted","Data":"d2f327b857aae24c850470521002d2a7854b8fcc3242bffa4489c09d8cb09b05"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.792994 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.795278 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lpppm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.75:8080/healthz\": dial tcp 10.217.0.75:8080: connect: connection refused" start-of-body= Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.795346 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" podUID="70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.75:8080/healthz\": dial tcp 10.217.0.75:8080: connect: connection refused" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.796021 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp9c4" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.796047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp9c4" event={"ID":"89f679b4-c725-4c83-9248-e1a292d851bf","Type":"ContainerDied","Data":"548af851d665e364485c5eb898f77d2c20c1894dc8cbebd6132366353c2a596b"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.796093 4687 scope.go:117] "RemoveContainer" containerID="d561ee793aff1f750be1237a59ae4cb40b9bae9cc2188082ffb128f88bd17b13" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.800125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgxm6" event={"ID":"a8b47421-912e-4faa-b3ed-33881459d76e","Type":"ContainerDied","Data":"ccb56d6724577f904f0a379ba7554835c661867cf10932e833e326c2d71df98c"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.800142 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgxm6" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.803913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8w67" event={"ID":"1cbd8ddb-1c88-4838-bce4-982b8c78ab4f","Type":"ContainerDied","Data":"c18c6a3cdfa88597fb2847756c6c9963ad0c24f9910b389653621c66bd362892"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.804039 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8w67" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.819412 4687 scope.go:117] "RemoveContainer" containerID="c3f003e916e3ed65918fc08c52012a6e17c85bc539fe7d05b2299b55110ec9e8" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.821171 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" podStartSLOduration=1.821149187 podStartE2EDuration="1.821149187s" podCreationTimestamp="2026-03-14 09:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:43.815012387 +0000 UTC m=+468.803252772" watchObservedRunningTime="2026-03-14 09:04:43.821149187 +0000 UTC m=+468.809389562" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.832155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x58t9" event={"ID":"764f93b4-9c3b-400d-b508-4534689e51a7","Type":"ContainerDied","Data":"12593782f52474abff4d3b25fc926ad4dc4c7f84814e0fe6fd7a3ce268b1ec21"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.832320 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x58t9" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.840925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" event={"ID":"696fbdef-0b69-41a2-bb11-df22a4f753af","Type":"ContainerDied","Data":"545b592103aadceb27b15a8be10b75dc46c6edfa3a74be023e2ff9315636b2af"} Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.841025 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tnnb6" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.846485 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.852608 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sgxm6"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.856263 4687 scope.go:117] "RemoveContainer" containerID="0c0150c1f9716f5336eeeee6a581decd40abc057b5e9b1bf425f950d6e747418" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.859018 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.865426 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qp9c4"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.869516 4687 scope.go:117] "RemoveContainer" containerID="c1b72f08b1ee5d7f7ed6bd748141dd20cb33168b8e40905b999160e2083fd4a2" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.874517 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.882364 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-j8w67"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.890272 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.890790 4687 scope.go:117] "RemoveContainer" containerID="375c35f6d2a11a0fb507d5b55a2a937abb04fecc58f3076413bc94be5f0fd1f0" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.898609 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x58t9"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.900366 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.903238 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tnnb6"] Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.905131 4687 scope.go:117] "RemoveContainer" containerID="b744b0ad5ed7fb85f660ac210df854dae5c13ef9e5507fa90d2b47abc4bc50f7" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.918179 4687 scope.go:117] "RemoveContainer" containerID="f235b8efd57ed0ab26d8877ee0af72d8f40734d85d0f17fd129b36e9b7856222" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.932755 4687 scope.go:117] "RemoveContainer" containerID="89ad9bbd9730d1cb1f3d16967eb08c9b5c829850687440b54bee056d3b0ed5dd" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.946906 4687 scope.go:117] "RemoveContainer" containerID="33967deb856c952b5b9cc08c9b33c1b75dbaae8f415ca1565aac07df9d60a6ba" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.959584 4687 scope.go:117] "RemoveContainer" containerID="fd9f6f300484e3d631bbfe2f40013a8d55fd5a407f2dbf170b6d0a09ae43002a" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.972787 4687 scope.go:117] "RemoveContainer" 
containerID="52168436c557c2c957d7c933b54bb3a9328aa4dafbeef6f26977d04b37824820" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.985750 4687 scope.go:117] "RemoveContainer" containerID="464471f7c17e607e6a5db8c7ddbe6ce3def70e5fc5cdbdf8839a3b14b1016f3c" Mar 14 09:04:43 crc kubenswrapper[4687]: I0314 09:04:43.996504 4687 scope.go:117] "RemoveContainer" containerID="2f4e0d3a7a2b0e3d2ef11d1e6180835a030f363dbfdd285f3822a3f693d6d617" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499495 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcd7"] Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499739 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499757 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499769 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499776 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499795 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499805 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499818 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" 
containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499826 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499838 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499879 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499888 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499896 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499906 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499913 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499922 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499929 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499941 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" 
containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499948 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499956 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499963 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499976 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.499983 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.499992 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500000 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="extract-content" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.500010 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500019 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="extract-utilities" Mar 14 09:04:44 crc kubenswrapper[4687]: E0314 09:04:44.500028 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" 
containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500035 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500136 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500149 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500159 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500169 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500182 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" containerName="registry-server" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500383 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" containerName="marketplace-operator" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.500991 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.503153 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.509300 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcd7"] Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.668757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-catalog-content\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.668865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjxf\" (UniqueName: \"kubernetes.io/projected/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-kube-api-access-fjjxf\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.668968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-utilities\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.712656 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fdzgw"] Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.715164 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.717369 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.720169 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fdzgw"] Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.770150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-catalog-content\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.770257 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjxf\" (UniqueName: \"kubernetes.io/projected/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-kube-api-access-fjjxf\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.770290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-utilities\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.771059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-utilities\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 
14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.771058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-catalog-content\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.795691 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjxf\" (UniqueName: \"kubernetes.io/projected/fff21ca7-1a0b-4a6d-84c2-2605625b4e62-kube-api-access-fjjxf\") pod \"redhat-marketplace-hbcd7\" (UID: \"fff21ca7-1a0b-4a6d-84c2-2605625b4e62\") " pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.845859 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.860003 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lpppm" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.871969 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jb6\" (UniqueName: \"kubernetes.io/projected/c235d724-9cf5-4fb3-92fe-2da6bb33abed-kube-api-access-96jb6\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.872085 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-utilities\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 
14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.872158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-catalog-content\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.972914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jb6\" (UniqueName: \"kubernetes.io/projected/c235d724-9cf5-4fb3-92fe-2da6bb33abed-kube-api-access-96jb6\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.973206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-utilities\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.973263 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-catalog-content\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 09:04:44.973726 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-utilities\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:44 crc kubenswrapper[4687]: I0314 
09:04:44.973790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c235d724-9cf5-4fb3-92fe-2da6bb33abed-catalog-content\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.011429 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jb6\" (UniqueName: \"kubernetes.io/projected/c235d724-9cf5-4fb3-92fe-2da6bb33abed-kube-api-access-96jb6\") pod \"redhat-operators-fdzgw\" (UID: \"c235d724-9cf5-4fb3-92fe-2da6bb33abed\") " pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.047365 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.238810 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcd7"] Mar 14 09:04:45 crc kubenswrapper[4687]: W0314 09:04:45.244312 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfff21ca7_1a0b_4a6d_84c2_2605625b4e62.slice/crio-0babe73d186974ca3f91b26f770190d83e7fc77a5c537f6c9b3592f3481283ff WatchSource:0}: Error finding container 0babe73d186974ca3f91b26f770190d83e7fc77a5c537f6c9b3592f3481283ff: Status 404 returned error can't find the container with id 0babe73d186974ca3f91b26f770190d83e7fc77a5c537f6c9b3592f3481283ff Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.409742 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fdzgw"] Mar 14 09:04:45 crc kubenswrapper[4687]: W0314 09:04:45.414747 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc235d724_9cf5_4fb3_92fe_2da6bb33abed.slice/crio-807143150eac5626e5ef7b7e1b89d2f1db9fbc223ee240fdf39c64d0200e013d WatchSource:0}: Error finding container 807143150eac5626e5ef7b7e1b89d2f1db9fbc223ee240fdf39c64d0200e013d: Status 404 returned error can't find the container with id 807143150eac5626e5ef7b7e1b89d2f1db9fbc223ee240fdf39c64d0200e013d Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.750586 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbd8ddb-1c88-4838-bce4-982b8c78ab4f" path="/var/lib/kubelet/pods/1cbd8ddb-1c88-4838-bce4-982b8c78ab4f/volumes" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.752060 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696fbdef-0b69-41a2-bb11-df22a4f753af" path="/var/lib/kubelet/pods/696fbdef-0b69-41a2-bb11-df22a4f753af/volumes" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.753067 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764f93b4-9c3b-400d-b508-4534689e51a7" path="/var/lib/kubelet/pods/764f93b4-9c3b-400d-b508-4534689e51a7/volumes" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.755411 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f679b4-c725-4c83-9248-e1a292d851bf" path="/var/lib/kubelet/pods/89f679b4-c725-4c83-9248-e1a292d851bf/volumes" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.757522 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b47421-912e-4faa-b3ed-33881459d76e" path="/var/lib/kubelet/pods/a8b47421-912e-4faa-b3ed-33881459d76e/volumes" Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.861853 4687 generic.go:334] "Generic (PLEG): container finished" podID="c235d724-9cf5-4fb3-92fe-2da6bb33abed" containerID="b6136bd790beef75a2686b4c24d3ec333f7e1d52dadd587ec67937a3fe268687" exitCode=0 Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.861941 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdzgw" event={"ID":"c235d724-9cf5-4fb3-92fe-2da6bb33abed","Type":"ContainerDied","Data":"b6136bd790beef75a2686b4c24d3ec333f7e1d52dadd587ec67937a3fe268687"} Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.861977 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdzgw" event={"ID":"c235d724-9cf5-4fb3-92fe-2da6bb33abed","Type":"ContainerStarted","Data":"807143150eac5626e5ef7b7e1b89d2f1db9fbc223ee240fdf39c64d0200e013d"} Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.864641 4687 generic.go:334] "Generic (PLEG): container finished" podID="fff21ca7-1a0b-4a6d-84c2-2605625b4e62" containerID="3c7119f354c4fc08326693291886e8bfd74b66c1360b1031d5a68fc09b77171c" exitCode=0 Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.865328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcd7" event={"ID":"fff21ca7-1a0b-4a6d-84c2-2605625b4e62","Type":"ContainerDied","Data":"3c7119f354c4fc08326693291886e8bfd74b66c1360b1031d5a68fc09b77171c"} Mar 14 09:04:45 crc kubenswrapper[4687]: I0314 09:04:45.865372 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcd7" event={"ID":"fff21ca7-1a0b-4a6d-84c2-2605625b4e62","Type":"ContainerStarted","Data":"0babe73d186974ca3f91b26f770190d83e7fc77a5c537f6c9b3592f3481283ff"} Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.873122 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdzgw" event={"ID":"c235d724-9cf5-4fb3-92fe-2da6bb33abed","Type":"ContainerStarted","Data":"aacfd4039c9c10d639bd0473274c24fb07bbe62c65e51c9046b115add5fb854a"} Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.876565 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcd7" 
event={"ID":"fff21ca7-1a0b-4a6d-84c2-2605625b4e62","Type":"ContainerStarted","Data":"edba72cb5419742318c651a06f5e30ea9366dbc9089eccf65e0dfa1ede1429d2"} Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.894534 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xr8fx"] Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.895488 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.900941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 09:04:46 crc kubenswrapper[4687]: I0314 09:04:46.907139 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xr8fx"] Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.005162 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-utilities\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.005251 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-catalog-content\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.005292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njpt\" (UniqueName: \"kubernetes.io/projected/cd8a2a9c-9fbb-417e-9428-503e7899305c-kube-api-access-6njpt\") pod 
\"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.095942 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g4cn6"] Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.097037 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.103278 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.106405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-utilities\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.106513 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-catalog-content\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.106559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njpt\" (UniqueName: \"kubernetes.io/projected/cd8a2a9c-9fbb-417e-9428-503e7899305c-kube-api-access-6njpt\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.107865 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-utilities\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.107920 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8a2a9c-9fbb-417e-9428-503e7899305c-catalog-content\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.141856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njpt\" (UniqueName: \"kubernetes.io/projected/cd8a2a9c-9fbb-417e-9428-503e7899305c-kube-api-access-6njpt\") pod \"certified-operators-xr8fx\" (UID: \"cd8a2a9c-9fbb-417e-9428-503e7899305c\") " pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.149542 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4cn6"] Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.207852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-utilities\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.207918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gk9\" (UniqueName: \"kubernetes.io/projected/3bbf9d41-c0f1-426b-bf77-578011dacfd5-kube-api-access-77gk9\") pod \"community-operators-g4cn6\" (UID: 
\"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.208128 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-catalog-content\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.262009 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.309696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gk9\" (UniqueName: \"kubernetes.io/projected/3bbf9d41-c0f1-426b-bf77-578011dacfd5-kube-api-access-77gk9\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.309766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-catalog-content\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.309804 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-utilities\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.310182 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-utilities\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.310265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf9d41-c0f1-426b-bf77-578011dacfd5-catalog-content\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.325777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gk9\" (UniqueName: \"kubernetes.io/projected/3bbf9d41-c0f1-426b-bf77-578011dacfd5-kube-api-access-77gk9\") pod \"community-operators-g4cn6\" (UID: \"3bbf9d41-c0f1-426b-bf77-578011dacfd5\") " pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.411455 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.634746 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xr8fx"] Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.791969 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4cn6"] Mar 14 09:04:47 crc kubenswrapper[4687]: W0314 09:04:47.864147 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bbf9d41_c0f1_426b_bf77_578011dacfd5.slice/crio-c3122c3f3381a87f9732c889e7666dcbb16ed6001c1919f774e1ade6483cc29f WatchSource:0}: Error finding container c3122c3f3381a87f9732c889e7666dcbb16ed6001c1919f774e1ade6483cc29f: Status 404 returned error can't find the container with id c3122c3f3381a87f9732c889e7666dcbb16ed6001c1919f774e1ade6483cc29f Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.881177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4cn6" event={"ID":"3bbf9d41-c0f1-426b-bf77-578011dacfd5","Type":"ContainerStarted","Data":"c3122c3f3381a87f9732c889e7666dcbb16ed6001c1919f774e1ade6483cc29f"} Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.883733 4687 generic.go:334] "Generic (PLEG): container finished" podID="c235d724-9cf5-4fb3-92fe-2da6bb33abed" containerID="aacfd4039c9c10d639bd0473274c24fb07bbe62c65e51c9046b115add5fb854a" exitCode=0 Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.883849 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdzgw" event={"ID":"c235d724-9cf5-4fb3-92fe-2da6bb33abed","Type":"ContainerDied","Data":"aacfd4039c9c10d639bd0473274c24fb07bbe62c65e51c9046b115add5fb854a"} Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.886600 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="fff21ca7-1a0b-4a6d-84c2-2605625b4e62" containerID="edba72cb5419742318c651a06f5e30ea9366dbc9089eccf65e0dfa1ede1429d2" exitCode=0 Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.886666 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcd7" event={"ID":"fff21ca7-1a0b-4a6d-84c2-2605625b4e62","Type":"ContainerDied","Data":"edba72cb5419742318c651a06f5e30ea9366dbc9089eccf65e0dfa1ede1429d2"} Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.889528 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd8a2a9c-9fbb-417e-9428-503e7899305c" containerID="98d9535bb7bd1df709d9a9bc619bcc62d32dee697a9a7b1d92e81e96bc583992" exitCode=0 Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.889557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8fx" event={"ID":"cd8a2a9c-9fbb-417e-9428-503e7899305c","Type":"ContainerDied","Data":"98d9535bb7bd1df709d9a9bc619bcc62d32dee697a9a7b1d92e81e96bc583992"} Mar 14 09:04:47 crc kubenswrapper[4687]: I0314 09:04:47.889579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8fx" event={"ID":"cd8a2a9c-9fbb-417e-9428-503e7899305c","Type":"ContainerStarted","Data":"427f4b63fb776c253356403e3d73b6436529ced46bc3134f43db1456a2925b0a"} Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.898168 4687 generic.go:334] "Generic (PLEG): container finished" podID="3bbf9d41-c0f1-426b-bf77-578011dacfd5" containerID="aa87842e846d3d74f0756517d7adf923de0eaa314183c5ddd8cd9afe67f77dd9" exitCode=0 Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.898266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4cn6" event={"ID":"3bbf9d41-c0f1-426b-bf77-578011dacfd5","Type":"ContainerDied","Data":"aa87842e846d3d74f0756517d7adf923de0eaa314183c5ddd8cd9afe67f77dd9"} Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.903117 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdzgw" event={"ID":"c235d724-9cf5-4fb3-92fe-2da6bb33abed","Type":"ContainerStarted","Data":"3bb2e00d3dee87805609e8aa3d61295d420d44afb9cf398b8e73633d238f30a0"} Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.909780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcd7" event={"ID":"fff21ca7-1a0b-4a6d-84c2-2605625b4e62","Type":"ContainerStarted","Data":"d56ef278d1ec9b14d89e0df30f21c39d95e245dfade10c8a0d0389ddb84e4b42"} Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.913065 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd8a2a9c-9fbb-417e-9428-503e7899305c" containerID="25d95273ea8e0de69b45161c72ca4c226b94f95f1f5d00eb27d8f8d8fc3e25b8" exitCode=0 Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.913117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8fx" event={"ID":"cd8a2a9c-9fbb-417e-9428-503e7899305c","Type":"ContainerDied","Data":"25d95273ea8e0de69b45161c72ca4c226b94f95f1f5d00eb27d8f8d8fc3e25b8"} Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.973158 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fdzgw" podStartSLOduration=2.532958594 podStartE2EDuration="4.973132235s" podCreationTimestamp="2026-03-14 09:04:44 +0000 UTC" firstStartedPulling="2026-03-14 09:04:45.864038792 +0000 UTC m=+470.852279177" lastFinishedPulling="2026-03-14 09:04:48.304212443 +0000 UTC m=+473.292452818" observedRunningTime="2026-03-14 09:04:48.968363748 +0000 UTC m=+473.956604123" watchObservedRunningTime="2026-03-14 09:04:48.973132235 +0000 UTC m=+473.961372610" Mar 14 09:04:48 crc kubenswrapper[4687]: I0314 09:04:48.993070 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hbcd7" 
podStartSLOduration=2.285714969 podStartE2EDuration="4.993044775s" podCreationTimestamp="2026-03-14 09:04:44 +0000 UTC" firstStartedPulling="2026-03-14 09:04:45.879375559 +0000 UTC m=+470.867615924" lastFinishedPulling="2026-03-14 09:04:48.586705355 +0000 UTC m=+473.574945730" observedRunningTime="2026-03-14 09:04:48.98794936 +0000 UTC m=+473.976189765" watchObservedRunningTime="2026-03-14 09:04:48.993044775 +0000 UTC m=+473.981285140" Mar 14 09:04:49 crc kubenswrapper[4687]: I0314 09:04:49.919647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4cn6" event={"ID":"3bbf9d41-c0f1-426b-bf77-578011dacfd5","Type":"ContainerStarted","Data":"4b79038f6dc229f44175db58821eda6d9ddf460e3acc9c20d85b4ad251482ee2"} Mar 14 09:04:49 crc kubenswrapper[4687]: I0314 09:04:49.921536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8fx" event={"ID":"cd8a2a9c-9fbb-417e-9428-503e7899305c","Type":"ContainerStarted","Data":"1b6dca3b0d3bbe2e0ec83287ccbaa32d6782dab8643a3dd51fb7d9302b098a97"} Mar 14 09:04:49 crc kubenswrapper[4687]: I0314 09:04:49.945435 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qh848" Mar 14 09:04:49 crc kubenswrapper[4687]: I0314 09:04:49.961738 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xr8fx" podStartSLOduration=2.515499042 podStartE2EDuration="3.961717763s" podCreationTimestamp="2026-03-14 09:04:46 +0000 UTC" firstStartedPulling="2026-03-14 09:04:47.894862879 +0000 UTC m=+472.883103254" lastFinishedPulling="2026-03-14 09:04:49.3410816 +0000 UTC m=+474.329321975" observedRunningTime="2026-03-14 09:04:49.95914544 +0000 UTC m=+474.947385815" watchObservedRunningTime="2026-03-14 09:04:49.961717763 +0000 UTC m=+474.949958148" Mar 14 09:04:50 crc kubenswrapper[4687]: I0314 09:04:50.016132 4687 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:04:50 crc kubenswrapper[4687]: I0314 09:04:50.929853 4687 generic.go:334] "Generic (PLEG): container finished" podID="3bbf9d41-c0f1-426b-bf77-578011dacfd5" containerID="4b79038f6dc229f44175db58821eda6d9ddf460e3acc9c20d85b4ad251482ee2" exitCode=0 Mar 14 09:04:50 crc kubenswrapper[4687]: I0314 09:04:50.929949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4cn6" event={"ID":"3bbf9d41-c0f1-426b-bf77-578011dacfd5","Type":"ContainerDied","Data":"4b79038f6dc229f44175db58821eda6d9ddf460e3acc9c20d85b4ad251482ee2"} Mar 14 09:04:51 crc kubenswrapper[4687]: I0314 09:04:51.936487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4cn6" event={"ID":"3bbf9d41-c0f1-426b-bf77-578011dacfd5","Type":"ContainerStarted","Data":"2548e87eab2f62ed59ed0f05a18e494379b6fb66e6608446134a4e2438cfdb84"} Mar 14 09:04:54 crc kubenswrapper[4687]: I0314 09:04:54.111630 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:04:54 crc kubenswrapper[4687]: I0314 09:04:54.111952 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:04:54 crc kubenswrapper[4687]: I0314 09:04:54.846566 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:54 crc kubenswrapper[4687]: 
I0314 09:04:54.847040 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:54 crc kubenswrapper[4687]: I0314 09:04:54.888208 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:54 crc kubenswrapper[4687]: I0314 09:04:54.905577 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4cn6" podStartSLOduration=5.308801003 podStartE2EDuration="7.905562068s" podCreationTimestamp="2026-03-14 09:04:47 +0000 UTC" firstStartedPulling="2026-03-14 09:04:48.900782804 +0000 UTC m=+473.889023179" lastFinishedPulling="2026-03-14 09:04:51.497543869 +0000 UTC m=+476.485784244" observedRunningTime="2026-03-14 09:04:51.956184546 +0000 UTC m=+476.944424921" watchObservedRunningTime="2026-03-14 09:04:54.905562068 +0000 UTC m=+479.893802443" Mar 14 09:04:55 crc kubenswrapper[4687]: I0314 09:04:55.006459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hbcd7" Mar 14 09:04:55 crc kubenswrapper[4687]: I0314 09:04:55.048381 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:55 crc kubenswrapper[4687]: I0314 09:04:55.048647 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:04:56 crc kubenswrapper[4687]: I0314 09:04:56.083548 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fdzgw" podUID="c235d724-9cf5-4fb3-92fe-2da6bb33abed" containerName="registry-server" probeResult="failure" output=< Mar 14 09:04:56 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:04:56 crc kubenswrapper[4687]: > Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 
09:04:57.262324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 09:04:57.262703 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 09:04:57.332577 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 09:04:57.412710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 09:04:57.412775 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:57 crc kubenswrapper[4687]: I0314 09:04:57.468430 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:04:58 crc kubenswrapper[4687]: I0314 09:04:58.010181 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xr8fx" Mar 14 09:04:58 crc kubenswrapper[4687]: I0314 09:04:58.021808 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4cn6" Mar 14 09:05:05 crc kubenswrapper[4687]: I0314 09:05:05.095191 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:05:05 crc kubenswrapper[4687]: I0314 09:05:05.143382 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fdzgw" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.057137 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" podUID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" containerName="registry" containerID="cri-o://080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415" gracePeriod=30 Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.444598 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561178 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561247 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561315 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561522 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg8pn\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561762 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561835 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561896 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.561934 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted\") pod \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\" (UID: \"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b\") " Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.575196 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.575606 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.575890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.576088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn" (OuterVolumeSpecName: "kube-api-access-pg8pn") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "kube-api-access-pg8pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.578988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.583304 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.586603 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.589868 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" (UID: "6eed2cb6-ab8f-4447-9a76-9d238ba48d9b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663636 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg8pn\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-kube-api-access-pg8pn\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663870 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663880 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663888 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663899 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663907 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:15 crc kubenswrapper[4687]: I0314 09:05:15.663915 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:05:16 crc 
kubenswrapper[4687]: I0314 09:05:16.078518 4687 generic.go:334] "Generic (PLEG): container finished" podID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" containerID="080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415" exitCode=0 Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.078558 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" event={"ID":"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b","Type":"ContainerDied","Data":"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415"} Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.078590 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" event={"ID":"6eed2cb6-ab8f-4447-9a76-9d238ba48d9b","Type":"ContainerDied","Data":"f4a30de14f08b3751bbb465b107348b3d04183fb44a3ba4d66c20d8ab705ef61"} Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.078624 4687 scope.go:117] "RemoveContainer" containerID="080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415" Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.078676 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gjtxn" Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.103516 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.103915 4687 scope.go:117] "RemoveContainer" containerID="080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415" Mar 14 09:05:16 crc kubenswrapper[4687]: E0314 09:05:16.105216 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415\": container with ID starting with 080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415 not found: ID does not exist" containerID="080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415" Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.105331 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415"} err="failed to get container status \"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415\": rpc error: code = NotFound desc = could not find container \"080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415\": container with ID starting with 080d4d2323ba8183618cbd42e062a18c8cd970132ae0ba85d3d224b91573c415 not found: ID does not exist" Mar 14 09:05:16 crc kubenswrapper[4687]: I0314 09:05:16.108059 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gjtxn"] Mar 14 09:05:17 crc kubenswrapper[4687]: I0314 09:05:17.744029 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" path="/var/lib/kubelet/pods/6eed2cb6-ab8f-4447-9a76-9d238ba48d9b/volumes" Mar 14 09:05:24 crc kubenswrapper[4687]: I0314 
09:05:24.111972 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:05:24 crc kubenswrapper[4687]: I0314 09:05:24.112260 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:05:24 crc kubenswrapper[4687]: I0314 09:05:24.112306 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:05:24 crc kubenswrapper[4687]: I0314 09:05:24.112889 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:05:24 crc kubenswrapper[4687]: I0314 09:05:24.112952 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead" gracePeriod=600 Mar 14 09:05:25 crc kubenswrapper[4687]: I0314 09:05:25.131608 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead" exitCode=0 Mar 14 
09:05:25 crc kubenswrapper[4687]: I0314 09:05:25.131678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead"} Mar 14 09:05:25 crc kubenswrapper[4687]: I0314 09:05:25.132125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab"} Mar 14 09:05:25 crc kubenswrapper[4687]: I0314 09:05:25.132143 4687 scope.go:117] "RemoveContainer" containerID="9bb306640c2891648ebdd64e06366096115f3bc41305509a7016177ef6a1385e" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.131806 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557986-2xszk"] Mar 14 09:06:00 crc kubenswrapper[4687]: E0314 09:06:00.133518 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" containerName="registry" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.133606 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" containerName="registry" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.133804 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eed2cb6-ab8f-4447-9a76-9d238ba48d9b" containerName="registry" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.134361 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.136858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.137345 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.138587 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-2xszk"] Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.139783 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.270958 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxphw\" (UniqueName: \"kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw\") pod \"auto-csr-approver-29557986-2xszk\" (UID: \"029b6f09-ba0c-427c-ac9d-5092188dab67\") " pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.372887 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxphw\" (UniqueName: \"kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw\") pod \"auto-csr-approver-29557986-2xszk\" (UID: \"029b6f09-ba0c-427c-ac9d-5092188dab67\") " pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.391316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxphw\" (UniqueName: \"kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw\") pod \"auto-csr-approver-29557986-2xszk\" (UID: \"029b6f09-ba0c-427c-ac9d-5092188dab67\") " 
pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.448825 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.919289 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-2xszk"] Mar 14 09:06:00 crc kubenswrapper[4687]: I0314 09:06:00.934972 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:06:01 crc kubenswrapper[4687]: I0314 09:06:01.348195 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-2xszk" event={"ID":"029b6f09-ba0c-427c-ac9d-5092188dab67","Type":"ContainerStarted","Data":"36733cac3938cc269858ec9dd9b8dbe62e1bfaee792eb8b82a3815fe118139ff"} Mar 14 09:06:02 crc kubenswrapper[4687]: I0314 09:06:02.358278 4687 generic.go:334] "Generic (PLEG): container finished" podID="029b6f09-ba0c-427c-ac9d-5092188dab67" containerID="10a0c41f3ba3501f71eb365070ba9a695d5d9842227e624c309db1300b33b10b" exitCode=0 Mar 14 09:06:02 crc kubenswrapper[4687]: I0314 09:06:02.358420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-2xszk" event={"ID":"029b6f09-ba0c-427c-ac9d-5092188dab67","Type":"ContainerDied","Data":"10a0c41f3ba3501f71eb365070ba9a695d5d9842227e624c309db1300b33b10b"} Mar 14 09:06:03 crc kubenswrapper[4687]: I0314 09:06:03.587794 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:03 crc kubenswrapper[4687]: I0314 09:06:03.718281 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxphw\" (UniqueName: \"kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw\") pod \"029b6f09-ba0c-427c-ac9d-5092188dab67\" (UID: \"029b6f09-ba0c-427c-ac9d-5092188dab67\") " Mar 14 09:06:03 crc kubenswrapper[4687]: I0314 09:06:03.724141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw" (OuterVolumeSpecName: "kube-api-access-xxphw") pod "029b6f09-ba0c-427c-ac9d-5092188dab67" (UID: "029b6f09-ba0c-427c-ac9d-5092188dab67"). InnerVolumeSpecName "kube-api-access-xxphw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:06:03 crc kubenswrapper[4687]: I0314 09:06:03.820205 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxphw\" (UniqueName: \"kubernetes.io/projected/029b6f09-ba0c-427c-ac9d-5092188dab67-kube-api-access-xxphw\") on node \"crc\" DevicePath \"\"" Mar 14 09:06:04 crc kubenswrapper[4687]: I0314 09:06:04.371640 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-2xszk" event={"ID":"029b6f09-ba0c-427c-ac9d-5092188dab67","Type":"ContainerDied","Data":"36733cac3938cc269858ec9dd9b8dbe62e1bfaee792eb8b82a3815fe118139ff"} Mar 14 09:06:04 crc kubenswrapper[4687]: I0314 09:06:04.371908 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36733cac3938cc269858ec9dd9b8dbe62e1bfaee792eb8b82a3815fe118139ff" Mar 14 09:06:04 crc kubenswrapper[4687]: I0314 09:06:04.371702 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-2xszk" Mar 14 09:06:04 crc kubenswrapper[4687]: I0314 09:06:04.636581 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-mm8hd"] Mar 14 09:06:04 crc kubenswrapper[4687]: I0314 09:06:04.639992 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-mm8hd"] Mar 14 09:06:05 crc kubenswrapper[4687]: I0314 09:06:05.743003 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f93f38-eae8-494e-b879-4240e2712982" path="/var/lib/kubelet/pods/17f93f38-eae8-494e-b879-4240e2712982/volumes" Mar 14 09:07:24 crc kubenswrapper[4687]: I0314 09:07:24.111236 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:07:24 crc kubenswrapper[4687]: I0314 09:07:24.111991 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:07:54 crc kubenswrapper[4687]: I0314 09:07:54.111796 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:07:54 crc kubenswrapper[4687]: I0314 09:07:54.112571 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:07:57 crc kubenswrapper[4687]: I0314 09:07:57.713742 4687 scope.go:117] "RemoveContainer" containerID="db43786bd30daaa83a89bb2fc5fdd3f1f89abb876dd5360c985b47a79dcbfd49" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.158752 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557988-hzkcr"] Mar 14 09:08:00 crc kubenswrapper[4687]: E0314 09:08:00.159619 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="029b6f09-ba0c-427c-ac9d-5092188dab67" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.159646 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="029b6f09-ba0c-427c-ac9d-5092188dab67" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.159806 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="029b6f09-ba0c-427c-ac9d-5092188dab67" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.160708 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.164238 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.164545 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.164570 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.166843 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-hzkcr"] Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.167940 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtw7\" (UniqueName: \"kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7\") pod \"auto-csr-approver-29557988-hzkcr\" (UID: \"452b1e5d-996c-485f-adb0-06fd7f1d38a4\") " pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.268701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtw7\" (UniqueName: \"kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7\") pod \"auto-csr-approver-29557988-hzkcr\" (UID: \"452b1e5d-996c-485f-adb0-06fd7f1d38a4\") " pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.299173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtw7\" (UniqueName: \"kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7\") pod \"auto-csr-approver-29557988-hzkcr\" (UID: \"452b1e5d-996c-485f-adb0-06fd7f1d38a4\") " 
pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.487799 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:00 crc kubenswrapper[4687]: I0314 09:08:00.744400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-hzkcr"] Mar 14 09:08:01 crc kubenswrapper[4687]: I0314 09:08:01.130677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" event={"ID":"452b1e5d-996c-485f-adb0-06fd7f1d38a4","Type":"ContainerStarted","Data":"b164bc0526fd976180cc4eec5e16b6b096e543195663febd37f10f23c7e648dd"} Mar 14 09:08:02 crc kubenswrapper[4687]: I0314 09:08:02.139389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" event={"ID":"452b1e5d-996c-485f-adb0-06fd7f1d38a4","Type":"ContainerStarted","Data":"31fa318c1fe23d4b5173f74856449bdb9d27216fa59dd85b1640dc9fe3ea41d8"} Mar 14 09:08:02 crc kubenswrapper[4687]: I0314 09:08:02.163392 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" podStartSLOduration=1.202050102 podStartE2EDuration="2.163374442s" podCreationTimestamp="2026-03-14 09:08:00 +0000 UTC" firstStartedPulling="2026-03-14 09:08:00.75021746 +0000 UTC m=+665.738457875" lastFinishedPulling="2026-03-14 09:08:01.71154181 +0000 UTC m=+666.699782215" observedRunningTime="2026-03-14 09:08:02.161110087 +0000 UTC m=+667.149350462" watchObservedRunningTime="2026-03-14 09:08:02.163374442 +0000 UTC m=+667.151614817" Mar 14 09:08:03 crc kubenswrapper[4687]: I0314 09:08:03.146669 4687 generic.go:334] "Generic (PLEG): container finished" podID="452b1e5d-996c-485f-adb0-06fd7f1d38a4" containerID="31fa318c1fe23d4b5173f74856449bdb9d27216fa59dd85b1640dc9fe3ea41d8" exitCode=0 Mar 14 09:08:03 crc kubenswrapper[4687]: 
I0314 09:08:03.146715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" event={"ID":"452b1e5d-996c-485f-adb0-06fd7f1d38a4","Type":"ContainerDied","Data":"31fa318c1fe23d4b5173f74856449bdb9d27216fa59dd85b1640dc9fe3ea41d8"} Mar 14 09:08:04 crc kubenswrapper[4687]: I0314 09:08:04.411565 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:04 crc kubenswrapper[4687]: I0314 09:08:04.423561 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvtw7\" (UniqueName: \"kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7\") pod \"452b1e5d-996c-485f-adb0-06fd7f1d38a4\" (UID: \"452b1e5d-996c-485f-adb0-06fd7f1d38a4\") " Mar 14 09:08:04 crc kubenswrapper[4687]: I0314 09:08:04.430873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7" (OuterVolumeSpecName: "kube-api-access-qvtw7") pod "452b1e5d-996c-485f-adb0-06fd7f1d38a4" (UID: "452b1e5d-996c-485f-adb0-06fd7f1d38a4"). InnerVolumeSpecName "kube-api-access-qvtw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:08:04 crc kubenswrapper[4687]: I0314 09:08:04.525184 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvtw7\" (UniqueName: \"kubernetes.io/projected/452b1e5d-996c-485f-adb0-06fd7f1d38a4-kube-api-access-qvtw7\") on node \"crc\" DevicePath \"\"" Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.161675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" event={"ID":"452b1e5d-996c-485f-adb0-06fd7f1d38a4","Type":"ContainerDied","Data":"b164bc0526fd976180cc4eec5e16b6b096e543195663febd37f10f23c7e648dd"} Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.161713 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-hzkcr" Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.161736 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b164bc0526fd976180cc4eec5e16b6b096e543195663febd37f10f23c7e648dd" Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.225753 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-4tcmh"] Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.232384 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-4tcmh"] Mar 14 09:08:05 crc kubenswrapper[4687]: I0314 09:08:05.748743 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7633ff-4aeb-4906-93e2-446a680ea1d2" path="/var/lib/kubelet/pods/ef7633ff-4aeb-4906-93e2-446a680ea1d2/volumes" Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.111864 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.112590 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.112652 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.113436 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.113535 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab" gracePeriod=600 Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.291098 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab" exitCode=0 Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.291152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab"} Mar 14 09:08:24 crc kubenswrapper[4687]: I0314 09:08:24.291680 4687 scope.go:117] "RemoveContainer" containerID="d335920c74431d77b673b87598ba34db7c3e54a8669ad1acf29d111408bc8ead" Mar 14 09:08:25 crc kubenswrapper[4687]: I0314 09:08:25.304804 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749"} Mar 14 09:08:57 crc kubenswrapper[4687]: I0314 09:08:57.790591 4687 scope.go:117] "RemoveContainer" containerID="7a79d55e50cf92c9dde41b9eefebbb69b51c8de5b1cfc0c86b2e37a13e55a4bc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.099233 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6"] Mar 14 09:09:52 crc kubenswrapper[4687]: E0314 09:09:52.099898 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452b1e5d-996c-485f-adb0-06fd7f1d38a4" containerName="oc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.099911 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="452b1e5d-996c-485f-adb0-06fd7f1d38a4" containerName="oc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.099995 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="452b1e5d-996c-485f-adb0-06fd7f1d38a4" containerName="oc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.100345 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.104694 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.104756 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tzlz8" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.105144 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.109041 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-nmcjc"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.109648 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-nmcjc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.114692 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-f82l4" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.116265 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.129809 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-nmcjc"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.143772 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sb5j5"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.144617 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.147190 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sb5j5"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.148723 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5wsq5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.292662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8276w\" (UniqueName: \"kubernetes.io/projected/9e984e4c-322e-4396-ab02-532fed35dcb4-kube-api-access-8276w\") pod \"cert-manager-858654f9db-nmcjc\" (UID: \"9e984e4c-322e-4396-ab02-532fed35dcb4\") " pod="cert-manager/cert-manager-858654f9db-nmcjc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.293123 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjq5\" (UniqueName: \"kubernetes.io/projected/1b523b43-4cd6-4b84-8e19-6f1f9b5b313c-kube-api-access-jjjq5\") pod \"cert-manager-cainjector-cf98fcc89-zxfr6\" (UID: \"1b523b43-4cd6-4b84-8e19-6f1f9b5b313c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.293361 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8pd\" (UniqueName: \"kubernetes.io/projected/30bb39e4-9d81-40e6-bac2-7ab9126ed815-kube-api-access-5v8pd\") pod \"cert-manager-webhook-687f57d79b-sb5j5\" (UID: \"30bb39e4-9d81-40e6-bac2-7ab9126ed815\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.394659 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8276w\" (UniqueName: 
\"kubernetes.io/projected/9e984e4c-322e-4396-ab02-532fed35dcb4-kube-api-access-8276w\") pod \"cert-manager-858654f9db-nmcjc\" (UID: \"9e984e4c-322e-4396-ab02-532fed35dcb4\") " pod="cert-manager/cert-manager-858654f9db-nmcjc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.394728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjq5\" (UniqueName: \"kubernetes.io/projected/1b523b43-4cd6-4b84-8e19-6f1f9b5b313c-kube-api-access-jjjq5\") pod \"cert-manager-cainjector-cf98fcc89-zxfr6\" (UID: \"1b523b43-4cd6-4b84-8e19-6f1f9b5b313c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.394779 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8pd\" (UniqueName: \"kubernetes.io/projected/30bb39e4-9d81-40e6-bac2-7ab9126ed815-kube-api-access-5v8pd\") pod \"cert-manager-webhook-687f57d79b-sb5j5\" (UID: \"30bb39e4-9d81-40e6-bac2-7ab9126ed815\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.412027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjq5\" (UniqueName: \"kubernetes.io/projected/1b523b43-4cd6-4b84-8e19-6f1f9b5b313c-kube-api-access-jjjq5\") pod \"cert-manager-cainjector-cf98fcc89-zxfr6\" (UID: \"1b523b43-4cd6-4b84-8e19-6f1f9b5b313c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.412352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8pd\" (UniqueName: \"kubernetes.io/projected/30bb39e4-9d81-40e6-bac2-7ab9126ed815-kube-api-access-5v8pd\") pod \"cert-manager-webhook-687f57d79b-sb5j5\" (UID: \"30bb39e4-9d81-40e6-bac2-7ab9126ed815\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.414034 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8276w\" (UniqueName: \"kubernetes.io/projected/9e984e4c-322e-4396-ab02-532fed35dcb4-kube-api-access-8276w\") pod \"cert-manager-858654f9db-nmcjc\" (UID: \"9e984e4c-322e-4396-ab02-532fed35dcb4\") " pod="cert-manager/cert-manager-858654f9db-nmcjc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.416096 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.431225 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-nmcjc" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.457366 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.636848 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.679200 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-nmcjc"] Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.709689 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sb5j5"] Mar 14 09:09:52 crc kubenswrapper[4687]: W0314 09:09:52.710946 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bb39e4_9d81_40e6_bac2_7ab9126ed815.slice/crio-97f5acca18f6fb12bc7859b33cb7c2094a3b0c4ab039547469665b5040967fdd WatchSource:0}: Error finding container 97f5acca18f6fb12bc7859b33cb7c2094a3b0c4ab039547469665b5040967fdd: Status 404 returned error can't find the container with id 97f5acca18f6fb12bc7859b33cb7c2094a3b0c4ab039547469665b5040967fdd Mar 14 09:09:52 crc 
kubenswrapper[4687]: I0314 09:09:52.864599 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-nmcjc" event={"ID":"9e984e4c-322e-4396-ab02-532fed35dcb4","Type":"ContainerStarted","Data":"a7b28e754b39dcc27daf90a94e5551412cca4ac87a7cecf486ffcfdd7f9dce85"} Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.865552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" event={"ID":"30bb39e4-9d81-40e6-bac2-7ab9126ed815","Type":"ContainerStarted","Data":"97f5acca18f6fb12bc7859b33cb7c2094a3b0c4ab039547469665b5040967fdd"} Mar 14 09:09:52 crc kubenswrapper[4687]: I0314 09:09:52.866556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" event={"ID":"1b523b43-4cd6-4b84-8e19-6f1f9b5b313c","Type":"ContainerStarted","Data":"0cb2a58e152cf29fce1dc49984d1746619f85ab4904e90650258c66f6244d127"} Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.886660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-nmcjc" event={"ID":"9e984e4c-322e-4396-ab02-532fed35dcb4","Type":"ContainerStarted","Data":"31efce9cbd0d56d2707ba4bd315fd03549823b63203bdbdeb67c350b651181d8"} Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.888167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" event={"ID":"30bb39e4-9d81-40e6-bac2-7ab9126ed815","Type":"ContainerStarted","Data":"504f59c2fb0fa5273514846caf2f6b29442b2e90ea4f037adb683dbf7c58d7c4"} Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.888294 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.889807 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" 
event={"ID":"1b523b43-4cd6-4b84-8e19-6f1f9b5b313c","Type":"ContainerStarted","Data":"fa086fa6ddf964e41dd8272f9670c049cb03f5e779cb9eac98d392ef46c85ac3"} Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.901912 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-nmcjc" podStartSLOduration=1.2359028300000001 podStartE2EDuration="4.901890383s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:52.691236279 +0000 UTC m=+777.679476654" lastFinishedPulling="2026-03-14 09:09:56.357223832 +0000 UTC m=+781.345464207" observedRunningTime="2026-03-14 09:09:56.901247408 +0000 UTC m=+781.889487783" watchObservedRunningTime="2026-03-14 09:09:56.901890383 +0000 UTC m=+781.890130798" Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.921669 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zxfr6" podStartSLOduration=1.232629331 podStartE2EDuration="4.92164727s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:52.647798469 +0000 UTC m=+777.636038844" lastFinishedPulling="2026-03-14 09:09:56.336816418 +0000 UTC m=+781.325056783" observedRunningTime="2026-03-14 09:09:56.921461286 +0000 UTC m=+781.909701671" watchObservedRunningTime="2026-03-14 09:09:56.92164727 +0000 UTC m=+781.909887665" Mar 14 09:09:56 crc kubenswrapper[4687]: I0314 09:09:56.942962 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" podStartSLOduration=1.305842224 podStartE2EDuration="4.942943775s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:52.71319539 +0000 UTC m=+777.701435785" lastFinishedPulling="2026-03-14 09:09:56.350296961 +0000 UTC m=+781.338537336" observedRunningTime="2026-03-14 09:09:56.938911475 +0000 UTC m=+781.927151850" 
watchObservedRunningTime="2026-03-14 09:09:56.942943775 +0000 UTC m=+781.931184150" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.127053 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557990-wdnmr"] Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.127955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.130437 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-wdnmr"] Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.130801 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.130930 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.130986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.295771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2l9\" (UniqueName: \"kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9\") pod \"auto-csr-approver-29557990-wdnmr\" (UID: \"0fda40ca-f725-406d-b1d7-60cbb5b3f386\") " pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.396741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m2l9\" (UniqueName: \"kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9\") pod \"auto-csr-approver-29557990-wdnmr\" (UID: \"0fda40ca-f725-406d-b1d7-60cbb5b3f386\") " 
pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.415600 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m2l9\" (UniqueName: \"kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9\") pod \"auto-csr-approver-29557990-wdnmr\" (UID: \"0fda40ca-f725-406d-b1d7-60cbb5b3f386\") " pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.451481 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.695132 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-wdnmr"] Mar 14 09:10:00 crc kubenswrapper[4687]: W0314 09:10:00.697176 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fda40ca_f725_406d_b1d7_60cbb5b3f386.slice/crio-6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676 WatchSource:0}: Error finding container 6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676: Status 404 returned error can't find the container with id 6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676 Mar 14 09:10:00 crc kubenswrapper[4687]: I0314 09:10:00.915222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" event={"ID":"0fda40ca-f725-406d-b1d7-60cbb5b3f386","Type":"ContainerStarted","Data":"6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676"} Mar 14 09:10:01 crc kubenswrapper[4687]: I0314 09:10:01.920087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" 
event={"ID":"0fda40ca-f725-406d-b1d7-60cbb5b3f386","Type":"ContainerStarted","Data":"42fcb2f1c4ce84038d17c8741e13c4260d45690fc465f94cb6644e4c178f851b"} Mar 14 09:10:01 crc kubenswrapper[4687]: I0314 09:10:01.935145 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" podStartSLOduration=1.202296927 podStartE2EDuration="1.935127157s" podCreationTimestamp="2026-03-14 09:10:00 +0000 UTC" firstStartedPulling="2026-03-14 09:10:00.700606325 +0000 UTC m=+785.688846740" lastFinishedPulling="2026-03-14 09:10:01.433436565 +0000 UTC m=+786.421676970" observedRunningTime="2026-03-14 09:10:01.93407252 +0000 UTC m=+786.922312895" watchObservedRunningTime="2026-03-14 09:10:01.935127157 +0000 UTC m=+786.923367532" Mar 14 09:10:02 crc kubenswrapper[4687]: I0314 09:10:02.460860 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-sb5j5" Mar 14 09:10:02 crc kubenswrapper[4687]: I0314 09:10:02.928908 4687 generic.go:334] "Generic (PLEG): container finished" podID="0fda40ca-f725-406d-b1d7-60cbb5b3f386" containerID="42fcb2f1c4ce84038d17c8741e13c4260d45690fc465f94cb6644e4c178f851b" exitCode=0 Mar 14 09:10:02 crc kubenswrapper[4687]: I0314 09:10:02.928990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" event={"ID":"0fda40ca-f725-406d-b1d7-60cbb5b3f386","Type":"ContainerDied","Data":"42fcb2f1c4ce84038d17c8741e13c4260d45690fc465f94cb6644e4c178f851b"} Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.273906 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.354083 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m2l9\" (UniqueName: \"kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9\") pod \"0fda40ca-f725-406d-b1d7-60cbb5b3f386\" (UID: \"0fda40ca-f725-406d-b1d7-60cbb5b3f386\") " Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.363085 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9" (OuterVolumeSpecName: "kube-api-access-8m2l9") pod "0fda40ca-f725-406d-b1d7-60cbb5b3f386" (UID: "0fda40ca-f725-406d-b1d7-60cbb5b3f386"). InnerVolumeSpecName "kube-api-access-8m2l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.455829 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m2l9\" (UniqueName: \"kubernetes.io/projected/0fda40ca-f725-406d-b1d7-60cbb5b3f386-kube-api-access-8m2l9\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.945927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" event={"ID":"0fda40ca-f725-406d-b1d7-60cbb5b3f386","Type":"ContainerDied","Data":"6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676"} Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.946033 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1b82f1dfc062d19b567f434a20f8475ba93512e915d4ff99c72d1993f9b676" Mar 14 09:10:04 crc kubenswrapper[4687]: I0314 09:10:04.946126 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-wdnmr" Mar 14 09:10:05 crc kubenswrapper[4687]: I0314 09:10:05.003718 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-xqgvk"] Mar 14 09:10:05 crc kubenswrapper[4687]: I0314 09:10:05.007615 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-xqgvk"] Mar 14 09:10:05 crc kubenswrapper[4687]: I0314 09:10:05.750140 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef16e0e-a8bf-4a55-823b-cecd4bd00831" path="/var/lib/kubelet/pods/6ef16e0e-a8bf-4a55-823b-cecd4bd00831/volumes" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.585305 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jkcr7"] Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.585889 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-controller" containerID="cri-o://77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.585940 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="nbdb" containerID="cri-o://c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.586044 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="northd" containerID="cri-o://b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.586110 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="sbdb" containerID="cri-o://36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.586120 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.586171 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-acl-logging" containerID="cri-o://fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.586217 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-node" containerID="cri-o://73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.648584 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" containerID="cri-o://613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" gracePeriod=30 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.940616 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/3.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.944309 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovn-acl-logging/0.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.944998 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovn-controller/0.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.945668 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.972807 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovnkube-controller/3.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.982936 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovn-acl-logging/0.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.983754 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jkcr7_f7a910c6-8772-4fc8-b557-8ca75235f11c/ovn-controller/0.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984253 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" exitCode=0 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984281 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" exitCode=0 Mar 14 
09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984293 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" exitCode=0 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984301 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" exitCode=0 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984308 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" exitCode=0 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984347 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" exitCode=0 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984357 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" exitCode=143 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984366 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" exitCode=143 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984514 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984526 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 09:10:08 crc 
kubenswrapper[4687]: I0314 09:10:08.984536 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984543 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984550 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984556 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984563 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984569 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984575 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" 
event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984596 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984605 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984612 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984618 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984626 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984632 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984639 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984672 4687 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984680 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984688 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984712 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984720 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984729 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984736 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 09:10:08 crc kubenswrapper[4687]: 
I0314 09:10:08.984743 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984751 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984758 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984765 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984771 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984778 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" event={"ID":"f7a910c6-8772-4fc8-b557-8ca75235f11c","Type":"ContainerDied","Data":"4c685799dcfe74d56816d99b16d2d841baa188fc6070066260a981d43b13dfb9"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984801 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984809 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984816 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984824 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984831 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984838 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984845 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984852 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984858 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984865 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.984881 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.985053 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jkcr7" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.989943 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/2.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.990622 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/1.log" Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.990745 4687 generic.go:334] "Generic (PLEG): container finished" podID="732cd580-e685-4b88-b227-b113c4be4c55" containerID="edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0" exitCode=2 Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.990833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerDied","Data":"edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0"} Mar 14 09:10:08 crc kubenswrapper[4687]: I0314 09:10:08.990920 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e"} Mar 14 09:10:08 crc kubenswrapper[4687]: 
I0314 09:10:08.992062 4687 scope.go:117] "RemoveContainer" containerID="edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0" Mar 14 09:10:08 crc kubenswrapper[4687]: E0314 09:10:08.994065 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xjjs4_openshift-multus(732cd580-e685-4b88-b227-b113c4be4c55)\"" pod="openshift-multus/multus-xjjs4" podUID="732cd580-e685-4b88-b227-b113c4be4c55" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005167 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sk726"] Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005544 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005565 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005597 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-acl-logging" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005606 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-acl-logging" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005619 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="sbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005628 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="sbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005637 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005646 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005653 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005683 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005702 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kubecfg-setup" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005709 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kubecfg-setup" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005721 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fda40ca-f725-406d-b1d7-60cbb5b3f386" containerName="oc" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005728 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fda40ca-f725-406d-b1d7-60cbb5b3f386" containerName="oc" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005762 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="nbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005772 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="nbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005786 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005795 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005806 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005814 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005843 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-node" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005854 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-node" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.005862 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="northd" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.005870 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="northd" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006028 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-node" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006041 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006053 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="sbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006083 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="nbdb" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006092 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006100 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fda40ca-f725-406d-b1d7-60cbb5b3f386" containerName="oc" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006110 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006121 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006130 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006139 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-acl-logging" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006172 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovn-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006180 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="northd" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.006364 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006419 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006631 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.006837 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.006850 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" containerName="ovnkube-controller" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.008699 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024747 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62c4j\" (UniqueName: \"kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024794 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024822 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024852 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024871 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024960 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.024997 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025022 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: 
\"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025074 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025109 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025144 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025158 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025187 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025201 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units\") pod \"f7a910c6-8772-4fc8-b557-8ca75235f11c\" (UID: \"f7a910c6-8772-4fc8-b557-8ca75235f11c\") " Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025316 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-systemd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-env-overrides\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025386 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-etc-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025414 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-node-log\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025461 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14a358f-1317-4410-acc5-d1b938c36f7a-ovn-node-metrics-cert\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-ovn\") pod \"ovnkube-node-sk726\" (UID: 
\"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025505 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-var-lib-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-config\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025541 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-netns\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025553 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-script-lib\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gh5\" (UniqueName: 
\"kubernetes.io/projected/f14a358f-1317-4410-acc5-d1b938c36f7a-kube-api-access-79gh5\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025655 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-log-socket\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025684 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-bin\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025722 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-kubelet\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025737 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-slash\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025763 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-systemd-units\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.025777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-netd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.026046 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.027502 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: 
"f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.027609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028069 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028122 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028203 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028766 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash" (OuterVolumeSpecName: "host-slash") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.028884 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.029931 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.029972 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket" (OuterVolumeSpecName: "log-socket") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.030003 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.033155 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.033185 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.033199 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log" (OuterVolumeSpecName: "node-log") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.033187 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.033236 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.034419 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.035303 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j" (OuterVolumeSpecName: "kube-api-access-62c4j") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "kube-api-access-62c4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.050752 4687 scope.go:117] "RemoveContainer" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.055328 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f7a910c6-8772-4fc8-b557-8ca75235f11c" (UID: "f7a910c6-8772-4fc8-b557-8ca75235f11c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.072565 4687 scope.go:117] "RemoveContainer" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.085667 4687 scope.go:117] "RemoveContainer" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.102840 4687 scope.go:117] "RemoveContainer" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.116339 4687 scope.go:117] "RemoveContainer" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-systemd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127264 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-env-overrides\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-etc-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127318 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127347 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-node-log\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127362 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14a358f-1317-4410-acc5-d1b938c36f7a-ovn-node-metrics-cert\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-ovn\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-etc-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-var-lib-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127455 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-var-lib-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127482 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-config\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-node-log\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-ovn\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-netns\") pod 
\"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127504 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-netns\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-openvswitch\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-run-systemd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127547 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-script-lib\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gh5\" (UniqueName: \"kubernetes.io/projected/f14a358f-1317-4410-acc5-d1b938c36f7a-kube-api-access-79gh5\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127663 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-log-socket\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127720 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127750 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sk726\" (UID: 
\"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-log-socket\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-bin\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-kubelet\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-slash\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-systemd-units\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc 
kubenswrapper[4687]: I0314 09:10:09.127929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-kubelet\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-netd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127957 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-systemd-units\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-slash\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.127912 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-bin\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14a358f-1317-4410-acc5-d1b938c36f7a-host-cni-netd\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128043 4687 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128064 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128083 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128101 4687 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-node-log\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128118 4687 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128136 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62c4j\" (UniqueName: \"kubernetes.io/projected/f7a910c6-8772-4fc8-b557-8ca75235f11c-kube-api-access-62c4j\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128153 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128170 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128187 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128203 4687 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128220 4687 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128238 4687 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128260 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7a910c6-8772-4fc8-b557-8ca75235f11c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128285 4687 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-slash\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128309 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128369 4687 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128392 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7a910c6-8772-4fc8-b557-8ca75235f11c-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128416 4687 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-log-socket\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128441 4687 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128460 4687 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7a910c6-8772-4fc8-b557-8ca75235f11c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-script-lib\") pod \"ovnkube-node-sk726\" 
(UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-env-overrides\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.128794 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14a358f-1317-4410-acc5-d1b938c36f7a-ovnkube-config\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.132029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14a358f-1317-4410-acc5-d1b938c36f7a-ovn-node-metrics-cert\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.146443 4687 scope.go:117] "RemoveContainer" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.147232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gh5\" (UniqueName: \"kubernetes.io/projected/f14a358f-1317-4410-acc5-d1b938c36f7a-kube-api-access-79gh5\") pod \"ovnkube-node-sk726\" (UID: \"f14a358f-1317-4410-acc5-d1b938c36f7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.162942 4687 scope.go:117] "RemoveContainer" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 
09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.175590 4687 scope.go:117] "RemoveContainer" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.192054 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.192374 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.192417 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} err="failed to get container status \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.192444 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.192851 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": container with ID starting with b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3 not found: ID does not exist" 
containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.192875 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} err="failed to get container status \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": rpc error: code = NotFound desc = could not find container \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": container with ID starting with b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.192892 4687 scope.go:117] "RemoveContainer" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.193211 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": container with ID starting with 36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829 not found: ID does not exist" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193243 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} err="failed to get container status \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": rpc error: code = NotFound desc = could not find container \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": container with ID starting with 36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193261 4687 scope.go:117] 
"RemoveContainer" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.193507 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": container with ID starting with c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342 not found: ID does not exist" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193529 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} err="failed to get container status \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": rpc error: code = NotFound desc = could not find container \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": container with ID starting with c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193543 4687 scope.go:117] "RemoveContainer" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.193861 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": container with ID starting with b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def not found: ID does not exist" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193892 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} err="failed to get container status \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": rpc error: code = NotFound desc = could not find container \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": container with ID starting with b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.193910 4687 scope.go:117] "RemoveContainer" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.194229 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": container with ID starting with 655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066 not found: ID does not exist" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194266 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} err="failed to get container status \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": rpc error: code = NotFound desc = could not find container \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": container with ID starting with 655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194289 4687 scope.go:117] "RemoveContainer" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.194619 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": container with ID starting with 73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686 not found: ID does not exist" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194640 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} err="failed to get container status \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": rpc error: code = NotFound desc = could not find container \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": container with ID starting with 73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194653 4687 scope.go:117] "RemoveContainer" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.194948 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": container with ID starting with fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87 not found: ID does not exist" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194968 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} err="failed to get container status \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": rpc error: code = NotFound desc = could not find container 
\"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": container with ID starting with fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.194979 4687 scope.go:117] "RemoveContainer" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.195398 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": container with ID starting with 77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03 not found: ID does not exist" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.195419 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} err="failed to get container status \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": rpc error: code = NotFound desc = could not find container \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": container with ID starting with 77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.195432 4687 scope.go:117] "RemoveContainer" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: E0314 09:10:09.195711 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": container with ID starting with a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a not found: ID does not exist" 
containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.195742 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} err="failed to get container status \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": rpc error: code = NotFound desc = could not find container \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": container with ID starting with a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.195763 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.195990 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} err="failed to get container status \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196007 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196273 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} err="failed to get container status \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": rpc error: code = NotFound desc = could 
not find container \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": container with ID starting with b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196298 4687 scope.go:117] "RemoveContainer" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196521 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} err="failed to get container status \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": rpc error: code = NotFound desc = could not find container \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": container with ID starting with 36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196539 4687 scope.go:117] "RemoveContainer" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196768 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} err="failed to get container status \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": rpc error: code = NotFound desc = could not find container \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": container with ID starting with c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.196794 4687 scope.go:117] "RemoveContainer" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 
09:10:09.197005 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} err="failed to get container status \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": rpc error: code = NotFound desc = could not find container \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": container with ID starting with b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197025 4687 scope.go:117] "RemoveContainer" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197377 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} err="failed to get container status \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": rpc error: code = NotFound desc = could not find container \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": container with ID starting with 655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197438 4687 scope.go:117] "RemoveContainer" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197716 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} err="failed to get container status \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": rpc error: code = NotFound desc = could not find container \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": container with ID starting with 
73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197751 4687 scope.go:117] "RemoveContainer" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.197984 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} err="failed to get container status \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": rpc error: code = NotFound desc = could not find container \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": container with ID starting with fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.198025 4687 scope.go:117] "RemoveContainer" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.198282 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} err="failed to get container status \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": rpc error: code = NotFound desc = could not find container \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": container with ID starting with 77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.198320 4687 scope.go:117] "RemoveContainer" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.198777 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} err="failed to get container status \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": rpc error: code = NotFound desc = could not find container \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": container with ID starting with a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.198795 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199154 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} err="failed to get container status \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199191 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199535 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} err="failed to get container status \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": rpc error: code = NotFound desc = could not find container \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": container with ID starting with b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3 not found: ID does not 
exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199558 4687 scope.go:117] "RemoveContainer" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199764 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} err="failed to get container status \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": rpc error: code = NotFound desc = could not find container \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": container with ID starting with 36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199786 4687 scope.go:117] "RemoveContainer" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.199991 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} err="failed to get container status \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": rpc error: code = NotFound desc = could not find container \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": container with ID starting with c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200012 4687 scope.go:117] "RemoveContainer" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200216 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} err="failed to get container status 
\"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": rpc error: code = NotFound desc = could not find container \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": container with ID starting with b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200236 4687 scope.go:117] "RemoveContainer" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200465 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} err="failed to get container status \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": rpc error: code = NotFound desc = could not find container \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": container with ID starting with 655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200486 4687 scope.go:117] "RemoveContainer" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200773 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} err="failed to get container status \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": rpc error: code = NotFound desc = could not find container \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": container with ID starting with 73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200793 4687 scope.go:117] "RemoveContainer" 
containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.200997 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} err="failed to get container status \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": rpc error: code = NotFound desc = could not find container \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": container with ID starting with fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201014 4687 scope.go:117] "RemoveContainer" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201236 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} err="failed to get container status \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": rpc error: code = NotFound desc = could not find container \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": container with ID starting with 77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201266 4687 scope.go:117] "RemoveContainer" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201548 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} err="failed to get container status \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": rpc error: code = NotFound desc = could 
not find container \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": container with ID starting with a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201573 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201813 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} err="failed to get container status \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.201839 4687 scope.go:117] "RemoveContainer" containerID="b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.202132 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3"} err="failed to get container status \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": rpc error: code = NotFound desc = could not find container \"b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3\": container with ID starting with b3ba38b752f7e68eb6aeaa1816d3a7cb94d40c1e854cabc93c67314b44b0ecd3 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.202150 4687 scope.go:117] "RemoveContainer" containerID="36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 
09:10:09.202470 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829"} err="failed to get container status \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": rpc error: code = NotFound desc = could not find container \"36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829\": container with ID starting with 36c61a9f13fbe3e88d3e92c8a91c8a9db5bc488980fec7004b8fea81deca8829 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.202505 4687 scope.go:117] "RemoveContainer" containerID="c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.203607 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342"} err="failed to get container status \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": rpc error: code = NotFound desc = could not find container \"c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342\": container with ID starting with c5a48433dd96151270a297549b044e7bc6e4f36c87079f31e9ab354b2d462342 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.203629 4687 scope.go:117] "RemoveContainer" containerID="b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.203912 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def"} err="failed to get container status \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": rpc error: code = NotFound desc = could not find container \"b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def\": container with ID starting with 
b5a730313ce19198ce0f38e1c2f57783bd4436b0bc12fa0cc29b20e561938def not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.203956 4687 scope.go:117] "RemoveContainer" containerID="655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204221 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066"} err="failed to get container status \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": rpc error: code = NotFound desc = could not find container \"655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066\": container with ID starting with 655ce029c941ad3a56325b035df72ca391a050855180cd0a9f623b3695cf5066 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204241 4687 scope.go:117] "RemoveContainer" containerID="73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204524 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686"} err="failed to get container status \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": rpc error: code = NotFound desc = could not find container \"73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686\": container with ID starting with 73cc470243ae82ad73dbcfc40c78d99e06a7251170f8cb3ff74aab978a680686 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204567 4687 scope.go:117] "RemoveContainer" containerID="fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204881 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87"} err="failed to get container status \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": rpc error: code = NotFound desc = could not find container \"fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87\": container with ID starting with fcff9aaf8e51b9a6dbc1c643ec1274f1e87dada981e6a8eeb89817829ff16c87 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.204924 4687 scope.go:117] "RemoveContainer" containerID="77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.205192 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03"} err="failed to get container status \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": rpc error: code = NotFound desc = could not find container \"77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03\": container with ID starting with 77a96fa22ccf180520ccde94eff69601e46e1d8ed034aa9324e8083ab78cbe03 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.205212 4687 scope.go:117] "RemoveContainer" containerID="a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.205577 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a"} err="failed to get container status \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": rpc error: code = NotFound desc = could not find container \"a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a\": container with ID starting with a7d43e1448c7b63594ec4814c60b0781b4b55822d4aa982f0d34eda8a469283a not found: ID does not 
exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.205619 4687 scope.go:117] "RemoveContainer" containerID="613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.205946 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479"} err="failed to get container status \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": rpc error: code = NotFound desc = could not find container \"613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479\": container with ID starting with 613c34b4e4fd96556c9cf8e36a3f9ce51fbd25e546fa7963a846a7e31cee9479 not found: ID does not exist" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.339449 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.341747 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jkcr7"] Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.348786 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jkcr7"] Mar 14 09:10:09 crc kubenswrapper[4687]: I0314 09:10:09.744710 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a910c6-8772-4fc8-b557-8ca75235f11c" path="/var/lib/kubelet/pods/f7a910c6-8772-4fc8-b557-8ca75235f11c/volumes" Mar 14 09:10:10 crc kubenswrapper[4687]: I0314 09:10:10.000651 4687 generic.go:334] "Generic (PLEG): container finished" podID="f14a358f-1317-4410-acc5-d1b938c36f7a" containerID="8797db4e6a9e543f6cc310cc84868e70ab68366494852d17e3865a0a98408e1e" exitCode=0 Mar 14 09:10:10 crc kubenswrapper[4687]: I0314 09:10:10.000759 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" 
event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerDied","Data":"8797db4e6a9e543f6cc310cc84868e70ab68366494852d17e3865a0a98408e1e"} Mar 14 09:10:10 crc kubenswrapper[4687]: I0314 09:10:10.000975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"35507c4317b67b1522e9249922a1eea6e91f6c9df20a8f63d623dd88511c6d7a"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.012821 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"ff47ccfd048343f7b05b2732ff392597d25a15abcd9228fe4d19912bcb196515"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.013238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"2c2f84bf0e92831e2f10ef3b81975eebf01dc59626af7cbfcea68964ef1ea7ca"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.013252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"098c94d444017c181173c112bbddcbe7442a06cf913e14261836a2d05d70dbe2"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.013262 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"911b483511828e3178e6a818adc70b2cf9d8e6acfc5ba7ca307033a4fcfbb334"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.013274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" 
event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"b72159da623f10b667c97733f2de451253627f0fa641ff79d8b4670aa34c9be2"} Mar 14 09:10:11 crc kubenswrapper[4687]: I0314 09:10:11.013283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"9100bb9d009c9c9995bb5f4178defb72c3fbbb8e53594dc8833ecabb6d91c185"} Mar 14 09:10:14 crc kubenswrapper[4687]: I0314 09:10:14.040211 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"533bc5f82d82e0ce1aba10bae7efe114ba044f4b939099df79d0b5c214de0d37"} Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.056587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" event={"ID":"f14a358f-1317-4410-acc5-d1b938c36f7a","Type":"ContainerStarted","Data":"b1fb25b278bc863488255a376f7c358eb55adac8908371d2e9aa24084bc14cc6"} Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.057022 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.057196 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.058038 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.090829 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.094622 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-sk726" podStartSLOduration=8.094604732 podStartE2EDuration="8.094604732s" podCreationTimestamp="2026-03-14 09:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:10:16.090118442 +0000 UTC m=+801.078358857" watchObservedRunningTime="2026-03-14 09:10:16.094604732 +0000 UTC m=+801.082845117" Mar 14 09:10:16 crc kubenswrapper[4687]: I0314 09:10:16.111807 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:23 crc kubenswrapper[4687]: I0314 09:10:23.737245 4687 scope.go:117] "RemoveContainer" containerID="edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0" Mar 14 09:10:23 crc kubenswrapper[4687]: E0314 09:10:23.738394 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xjjs4_openshift-multus(732cd580-e685-4b88-b227-b113c4be4c55)\"" pod="openshift-multus/multus-xjjs4" podUID="732cd580-e685-4b88-b227-b113c4be4c55" Mar 14 09:10:24 crc kubenswrapper[4687]: I0314 09:10:24.112149 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:10:24 crc kubenswrapper[4687]: I0314 09:10:24.112255 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:10:28 crc kubenswrapper[4687]: 
I0314 09:10:28.705848 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"] Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.707780 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.709873 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.715293 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"] Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.805817 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.805980 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.806069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bkg\" (UniqueName: 
\"kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.907558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.907688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.907737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bkg\" (UniqueName: \"kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.908496 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: 
\"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.908642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:28 crc kubenswrapper[4687]: I0314 09:10:28.940736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bkg\" (UniqueName: \"kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: I0314 09:10:29.023587 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.066832 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(659bf534e2692e9849ef28fdc9761f3aa380bc894620e710e896091f7342bba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.066967 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(659bf534e2692e9849ef28fdc9761f3aa380bc894620e710e896091f7342bba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.067032 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(659bf534e2692e9849ef28fdc9761f3aa380bc894620e710e896091f7342bba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.067141 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(659bf534e2692e9849ef28fdc9761f3aa380bc894620e710e896091f7342bba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" Mar 14 09:10:29 crc kubenswrapper[4687]: I0314 09:10:29.143397 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: I0314 09:10:29.144278 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.173247 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(5128da5313ca99b4baf524ab4eb2c040f1842035ad81fe14c2be0954d5213706): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.173317 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(5128da5313ca99b4baf524ab4eb2c040f1842035ad81fe14c2be0954d5213706): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.173371 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(5128da5313ca99b4baf524ab4eb2c040f1842035ad81fe14c2be0954d5213706): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:29 crc kubenswrapper[4687]: E0314 09:10:29.173429 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(5128da5313ca99b4baf524ab4eb2c040f1842035ad81fe14c2be0954d5213706): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" Mar 14 09:10:38 crc kubenswrapper[4687]: I0314 09:10:38.737678 4687 scope.go:117] "RemoveContainer" containerID="edc37d61cb2f7b5ae1fceebc723282c50ef7ce7c7467e6cc52dccbb222505bd0" Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.209618 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/2.log" Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.210920 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/1.log" Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.211001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjjs4" event={"ID":"732cd580-e685-4b88-b227-b113c4be4c55","Type":"ContainerStarted","Data":"5d53edbee71b075b9d6032877270ea4cac4409922c6db091c1b3042aaf9b71e1"} Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.421557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sk726" Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.736724 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:39 crc kubenswrapper[4687]: I0314 09:10:39.737670 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:39 crc kubenswrapper[4687]: E0314 09:10:39.788140 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(8bccda156a5bba886cadef99bad94d9b111dfb3a33f61a6f56d38576fa6f49f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:10:39 crc kubenswrapper[4687]: E0314 09:10:39.788215 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(8bccda156a5bba886cadef99bad94d9b111dfb3a33f61a6f56d38576fa6f49f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:39 crc kubenswrapper[4687]: E0314 09:10:39.788248 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(8bccda156a5bba886cadef99bad94d9b111dfb3a33f61a6f56d38576fa6f49f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:39 crc kubenswrapper[4687]: E0314 09:10:39.788311 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace(575133e9-f490-4a9e-b062-f8b33b86ef27)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_openshift-marketplace_575133e9-f490-4a9e-b062-f8b33b86ef27_0(8bccda156a5bba886cadef99bad94d9b111dfb3a33f61a6f56d38576fa6f49f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" Mar 14 09:10:52 crc kubenswrapper[4687]: I0314 09:10:52.736790 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" Mar 14 09:10:52 crc kubenswrapper[4687]: I0314 09:10:52.738318 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"
Mar 14 09:10:52 crc kubenswrapper[4687]: I0314 09:10:52.990388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"]
Mar 14 09:10:53 crc kubenswrapper[4687]: I0314 09:10:53.313568 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerStarted","Data":"ef266cfd11c65097309e1f3063c054866b93549f57d017a9b16686e6d6176161"}
Mar 14 09:10:53 crc kubenswrapper[4687]: I0314 09:10:53.313878 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerStarted","Data":"8e5bccaf9d3b0bafb4efa1e7344ce3ea0b5aabe4cf3bb73a246a3b01f458083d"}
Mar 14 09:10:54 crc kubenswrapper[4687]: I0314 09:10:54.111177 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 09:10:54 crc kubenswrapper[4687]: I0314 09:10:54.111578 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 09:10:54 crc kubenswrapper[4687]: I0314 09:10:54.321236 4687 generic.go:334] "Generic (PLEG): container finished" podID="575133e9-f490-4a9e-b062-f8b33b86ef27"
containerID="ef266cfd11c65097309e1f3063c054866b93549f57d017a9b16686e6d6176161" exitCode=0
Mar 14 09:10:54 crc kubenswrapper[4687]: I0314 09:10:54.321292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerDied","Data":"ef266cfd11c65097309e1f3063c054866b93549f57d017a9b16686e6d6176161"}
Mar 14 09:10:56 crc kubenswrapper[4687]: I0314 09:10:56.335277 4687 generic.go:334] "Generic (PLEG): container finished" podID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerID="06b64b0d82cb4c5f83370b33c0e341d59dd6610d972b1bccc2f94882078c1a84" exitCode=0
Mar 14 09:10:56 crc kubenswrapper[4687]: I0314 09:10:56.335373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerDied","Data":"06b64b0d82cb4c5f83370b33c0e341d59dd6610d972b1bccc2f94882078c1a84"}
Mar 14 09:10:57 crc kubenswrapper[4687]: I0314 09:10:57.347780 4687 generic.go:334] "Generic (PLEG): container finished" podID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerID="6de6f50e2a91cf922623bd24a1c990c92a75f0260757580fa4ac39c17785048f" exitCode=0
Mar 14 09:10:57 crc kubenswrapper[4687]: I0314 09:10:57.347896 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerDied","Data":"6de6f50e2a91cf922623bd24a1c990c92a75f0260757580fa4ac39c17785048f"}
Mar 14 09:10:57 crc kubenswrapper[4687]: I0314 09:10:57.866729 4687 scope.go:117] "RemoveContainer" containerID="67898b02166c9e69d6854a87aa266c2fca6e507f9730673d224f3f214d6c36a3"
Mar 14 09:10:57 crc kubenswrapper[4687]: I0314 09:10:57.918086 4687 scope.go:117] "RemoveContainer"
containerID="f0e116b3c20ee740ddafce4edf4e8c51634a2efc260e531803da97a3a82a128e"
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.360016 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xjjs4_732cd580-e685-4b88-b227-b113c4be4c55/kube-multus/2.log"
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.686317 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.728190 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle\") pod \"575133e9-f490-4a9e-b062-f8b33b86ef27\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") "
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.728374 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util\") pod \"575133e9-f490-4a9e-b062-f8b33b86ef27\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") "
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.728487 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9bkg\" (UniqueName: \"kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg\") pod \"575133e9-f490-4a9e-b062-f8b33b86ef27\" (UID: \"575133e9-f490-4a9e-b062-f8b33b86ef27\") "
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.731051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle" (OuterVolumeSpecName: "bundle") pod "575133e9-f490-4a9e-b062-f8b33b86ef27" (UID: "575133e9-f490-4a9e-b062-f8b33b86ef27"). InnerVolumeSpecName "bundle".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.734948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg" (OuterVolumeSpecName: "kube-api-access-w9bkg") pod "575133e9-f490-4a9e-b062-f8b33b86ef27" (UID: "575133e9-f490-4a9e-b062-f8b33b86ef27"). InnerVolumeSpecName "kube-api-access-w9bkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.829621 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9bkg\" (UniqueName: \"kubernetes.io/projected/575133e9-f490-4a9e-b062-f8b33b86ef27-kube-api-access-w9bkg\") on node \"crc\" DevicePath \"\""
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.829655 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.901246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util" (OuterVolumeSpecName: "util") pod "575133e9-f490-4a9e-b062-f8b33b86ef27" (UID: "575133e9-f490-4a9e-b062-f8b33b86ef27"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:10:58 crc kubenswrapper[4687]: I0314 09:10:58.930605 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/575133e9-f490-4a9e-b062-f8b33b86ef27-util\") on node \"crc\" DevicePath \"\""
Mar 14 09:10:59 crc kubenswrapper[4687]: I0314 09:10:59.367658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz" event={"ID":"575133e9-f490-4a9e-b062-f8b33b86ef27","Type":"ContainerDied","Data":"8e5bccaf9d3b0bafb4efa1e7344ce3ea0b5aabe4cf3bb73a246a3b01f458083d"}
Mar 14 09:10:59 crc kubenswrapper[4687]: I0314 09:10:59.367695 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5bccaf9d3b0bafb4efa1e7344ce3ea0b5aabe4cf3bb73a246a3b01f458083d"
Mar 14 09:10:59 crc kubenswrapper[4687]: I0314 09:10:59.367735 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz"
Mar 14 09:11:02 crc kubenswrapper[4687]: I0314 09:11:02.496183 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.044297 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"]
Mar 14 09:11:12 crc kubenswrapper[4687]: E0314 09:11:12.044949 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="pull"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.044961 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="pull"
Mar 14 09:11:12 crc kubenswrapper[4687]: E0314 09:11:12.044976 4687 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="util"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.044983 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="util"
Mar 14 09:11:12 crc kubenswrapper[4687]: E0314 09:11:12.044991 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="extract"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.044997 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="extract"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.045081 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="575133e9-f490-4a9e-b062-f8b33b86ef27" containerName="extract"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.045428 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.046965 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.047081 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ttlq2"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.047163 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.064856 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.174035 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"]
Mar 14 09:11:12 crc
kubenswrapper[4687]: I0314 09:11:12.174870 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.177927 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-9fs5h"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.178172 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.186182 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.190949 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.191761 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.191789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.191833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvp9\" (UniqueName: \"kubernetes.io/projected/205a8fe2-db14-415e-9c48-f83103f799a6-kube-api-access-czvp9\") pod \"obo-prometheus-operator-68bc856cb9-zvl6t\" (UID: \"205a8fe2-db14-415e-9c48-f83103f799a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.191892 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.214415 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.285641 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9mngh"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.286595 4687 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.288374 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-l7n4h"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.288834 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.293360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czvp9\" (UniqueName: \"kubernetes.io/projected/205a8fe2-db14-415e-9c48-f83103f799a6-kube-api-access-czvp9\") pod \"obo-prometheus-operator-68bc856cb9-zvl6t\" (UID: \"205a8fe2-db14-415e-9c48-f83103f799a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.293933 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.295429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.304232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName:
\"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.304374 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0e89855-d75c-4904-9632-30763bfbe2d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz\" (UID: \"c0e89855-d75c-4904-9632-30763bfbe2d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.308595 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9mngh"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.314453 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvp9\" (UniqueName: \"kubernetes.io/projected/205a8fe2-db14-415e-9c48-f83103f799a6-kube-api-access-czvp9\") pod \"obo-prometheus-operator-68bc856cb9-zvl6t\" (UID: \"205a8fe2-db14-415e-9c48-f83103f799a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.361859 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.395815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.395885 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.395935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ppg\" (UniqueName: \"kubernetes.io/projected/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-kube-api-access-76ppg\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.395984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314
09:11:12.396442 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p2jnw"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.397083 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.404962 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zlzds"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.406230 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p2jnw"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.495849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.496694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.496724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.496758 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ppg\" (UniqueName:
\"kubernetes.io/projected/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-kube-api-access-76ppg\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.496793 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.502498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.510973 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.520151 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bceb653-5d73-4f96-a7aa-fdd6aaa604f9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8\" (UID: \"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9\") "
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.529598 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ppg\" (UniqueName: \"kubernetes.io/projected/4359a7ad-0b1c-42da-bba2-abfbf773cdfc-kube-api-access-76ppg\") pod \"observability-operator-59bdc8b94-9mngh\" (UID: \"4359a7ad-0b1c-42da-bba2-abfbf773cdfc\") " pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.598703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d73d1b3-dbad-4f27-8562-e534f69c896c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.599089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd7j\" (UniqueName: \"kubernetes.io/projected/7d73d1b3-dbad-4f27-8562-e534f69c896c-kube-api-access-6fd7j\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.650216 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9mngh"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.700149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fd7j\" (UniqueName: \"kubernetes.io/projected/7d73d1b3-dbad-4f27-8562-e534f69c896c-kube-api-access-6fd7j\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.700248 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d73d1b3-dbad-4f27-8562-e534f69c896c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.701554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d73d1b3-dbad-4f27-8562-e534f69c896c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.717037 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fd7j\" (UniqueName: \"kubernetes.io/projected/7d73d1b3-dbad-4f27-8562-e534f69c896c-kube-api-access-6fd7j\") pod \"perses-operator-5bf474d74f-p2jnw\" (UID: \"7d73d1b3-dbad-4f27-8562-e534f69c896c\") " pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.731777 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw"
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.752749 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.771961 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.798972 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t"]
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.812004 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"
Mar 14 09:11:12 crc kubenswrapper[4687]: W0314 09:11:12.815051 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod205a8fe2_db14_415e_9c48_f83103f799a6.slice/crio-8b7b3c610f664085594996dbc9e1872c4a747015be7cbea5b62f67586fea0c47 WatchSource:0}: Error finding container 8b7b3c610f664085594996dbc9e1872c4a747015be7cbea5b62f67586fea0c47: Status 404 returned error can't find the container with id 8b7b3c610f664085594996dbc9e1872c4a747015be7cbea5b62f67586fea0c47
Mar 14 09:11:12 crc kubenswrapper[4687]: I0314 09:11:12.878606 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9mngh"]
Mar 14 09:11:12 crc kubenswrapper[4687]: W0314 09:11:12.889240 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4359a7ad_0b1c_42da_bba2_abfbf773cdfc.slice/crio-85ee033a3a7036cdbe15daa5dda20cae36330389b87b6f29500f53133d425320 WatchSource:0}: Error finding container 85ee033a3a7036cdbe15daa5dda20cae36330389b87b6f29500f53133d425320:
Status 404 returned error can't find the container with id 85ee033a3a7036cdbe15daa5dda20cae36330389b87b6f29500f53133d425320
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.026142 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p2jnw"]
Mar 14 09:11:13 crc kubenswrapper[4687]: W0314 09:11:13.035131 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d73d1b3_dbad_4f27_8562_e534f69c896c.slice/crio-ffbd116649695f324119def6c6190708e8688345d40aabfbeb07a6c48b6525a0 WatchSource:0}: Error finding container ffbd116649695f324119def6c6190708e8688345d40aabfbeb07a6c48b6525a0: Status 404 returned error can't find the container with id ffbd116649695f324119def6c6190708e8688345d40aabfbeb07a6c48b6525a0
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.058040 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8"]
Mar 14 09:11:13 crc kubenswrapper[4687]: W0314 09:11:13.065480 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bceb653_5d73_4f96_a7aa_fdd6aaa604f9.slice/crio-6c75bcd4fa95de27aab0c16a00720d54ad7e6f8e7493839623174cbaa634daf7 WatchSource:0}: Error finding container 6c75bcd4fa95de27aab0c16a00720d54ad7e6f8e7493839623174cbaa634daf7: Status 404 returned error can't find the container with id 6c75bcd4fa95de27aab0c16a00720d54ad7e6f8e7493839623174cbaa634daf7
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.452522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz" event={"ID":"c0e89855-d75c-4904-9632-30763bfbe2d7","Type":"ContainerStarted","Data":"03e61409a2c067207f1c1a662583b1dca4b2eba70bca7352c059ea24d8e9a7ab"}
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314
09:11:13.453632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t" event={"ID":"205a8fe2-db14-415e-9c48-f83103f799a6","Type":"ContainerStarted","Data":"8b7b3c610f664085594996dbc9e1872c4a747015be7cbea5b62f67586fea0c47"}
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.454454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw" event={"ID":"7d73d1b3-dbad-4f27-8562-e534f69c896c","Type":"ContainerStarted","Data":"ffbd116649695f324119def6c6190708e8688345d40aabfbeb07a6c48b6525a0"}
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.455188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9mngh" event={"ID":"4359a7ad-0b1c-42da-bba2-abfbf773cdfc","Type":"ContainerStarted","Data":"85ee033a3a7036cdbe15daa5dda20cae36330389b87b6f29500f53133d425320"}
Mar 14 09:11:13 crc kubenswrapper[4687]: I0314 09:11:13.456208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8" event={"ID":"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9","Type":"ContainerStarted","Data":"6c75bcd4fa95de27aab0c16a00720d54ad7e6f8e7493839623174cbaa634daf7"}
Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.525368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t" event={"ID":"205a8fe2-db14-415e-9c48-f83103f799a6","Type":"ContainerStarted","Data":"1319d4339b1c4ce196517a5c7ecfa28d85888fe146d61397809d719f4732722b"}
Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.526668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz"
event={"ID":"c0e89855-d75c-4904-9632-30763bfbe2d7","Type":"ContainerStarted","Data":"20afd34c7e574d06060e0ae337c3ab9fdc5225956dbc4c52de5910df49d4f649"} Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.528107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw" event={"ID":"7d73d1b3-dbad-4f27-8562-e534f69c896c","Type":"ContainerStarted","Data":"f3e092328bffa09f2d2c3c0cd4aed8e8534155b3dbd173b713541c63e802fcf4"} Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.528193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.529411 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9mngh" event={"ID":"4359a7ad-0b1c-42da-bba2-abfbf773cdfc","Type":"ContainerStarted","Data":"3fe9fa84303ce89b5d57f6b8e2a36ff163cbce3c1d9253da2656a5f1f8ee4b03"} Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.529885 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9mngh" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.530637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8" event={"ID":"7bceb653-5d73-4f96-a7aa-fdd6aaa604f9","Type":"ContainerStarted","Data":"13b404e8e51abbaa7e86e161cc5a0d1f9fb21d49992b90fa78c7dabf66d613f8"} Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.545252 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zvl6t" podStartSLOduration=1.442949914 podStartE2EDuration="10.545234948s" podCreationTimestamp="2026-03-14 09:11:12 +0000 UTC" firstStartedPulling="2026-03-14 09:11:12.822252327 +0000 UTC m=+857.810492702" lastFinishedPulling="2026-03-14 
09:11:21.924537361 +0000 UTC m=+866.912777736" observedRunningTime="2026-03-14 09:11:22.540650515 +0000 UTC m=+867.528890890" watchObservedRunningTime="2026-03-14 09:11:22.545234948 +0000 UTC m=+867.533475333" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.559974 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9mngh" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.566528 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw" podStartSLOduration=1.712461991 podStartE2EDuration="10.566510242s" podCreationTimestamp="2026-03-14 09:11:12 +0000 UTC" firstStartedPulling="2026-03-14 09:11:13.03757112 +0000 UTC m=+858.025811495" lastFinishedPulling="2026-03-14 09:11:21.891619371 +0000 UTC m=+866.879859746" observedRunningTime="2026-03-14 09:11:22.562103074 +0000 UTC m=+867.550343439" watchObservedRunningTime="2026-03-14 09:11:22.566510242 +0000 UTC m=+867.554750617" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.583795 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz" podStartSLOduration=1.464627228 podStartE2EDuration="10.583777997s" podCreationTimestamp="2026-03-14 09:11:12 +0000 UTC" firstStartedPulling="2026-03-14 09:11:12.771766334 +0000 UTC m=+857.760006709" lastFinishedPulling="2026-03-14 09:11:21.890917103 +0000 UTC m=+866.879157478" observedRunningTime="2026-03-14 09:11:22.582084735 +0000 UTC m=+867.570325130" watchObservedRunningTime="2026-03-14 09:11:22.583777997 +0000 UTC m=+867.572018382" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.619413 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9mngh" podStartSLOduration=1.583567358 podStartE2EDuration="10.619396915s" 
podCreationTimestamp="2026-03-14 09:11:12 +0000 UTC" firstStartedPulling="2026-03-14 09:11:12.892619531 +0000 UTC m=+857.880859906" lastFinishedPulling="2026-03-14 09:11:21.928449088 +0000 UTC m=+866.916689463" observedRunningTime="2026-03-14 09:11:22.616669817 +0000 UTC m=+867.604910212" watchObservedRunningTime="2026-03-14 09:11:22.619396915 +0000 UTC m=+867.607637310" Mar 14 09:11:22 crc kubenswrapper[4687]: I0314 09:11:22.634965 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8" podStartSLOduration=1.787615482 podStartE2EDuration="10.634952068s" podCreationTimestamp="2026-03-14 09:11:12 +0000 UTC" firstStartedPulling="2026-03-14 09:11:13.068552743 +0000 UTC m=+858.056793108" lastFinishedPulling="2026-03-14 09:11:21.915889319 +0000 UTC m=+866.904129694" observedRunningTime="2026-03-14 09:11:22.632358814 +0000 UTC m=+867.620599209" watchObservedRunningTime="2026-03-14 09:11:22.634952068 +0000 UTC m=+867.623192443" Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.110937 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.110989 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.111025 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 
09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.111516 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.111565 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749" gracePeriod=600 Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.544102 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749" exitCode=0 Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.544158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749"} Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.544477 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b"} Mar 14 09:11:24 crc kubenswrapper[4687]: I0314 09:11:24.544497 4687 scope.go:117] "RemoveContainer" containerID="2a07a991210fff78aa4722b6a920d2dd3d187b295a733c481e20d13ac8c760ab" Mar 14 09:11:32 crc kubenswrapper[4687]: I0314 09:11:32.734212 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-p2jnw" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.619653 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9"] Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.621304 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.624755 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.633177 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9"] Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.714860 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.714921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.714938 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzzt\" (UniqueName: \"kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.816164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.816276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.816301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twzzt\" (UniqueName: \"kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.816974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.817615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.835027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twzzt\" (UniqueName: \"kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:49 crc kubenswrapper[4687]: I0314 09:11:49.937040 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:50 crc kubenswrapper[4687]: I0314 09:11:50.402434 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9"] Mar 14 09:11:50 crc kubenswrapper[4687]: I0314 09:11:50.702819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerStarted","Data":"b92f42b724c8f3055257e97eb1ecba53ff99123e3c1793ffe418f7df7b0b44a1"} Mar 14 09:11:51 crc kubenswrapper[4687]: I0314 09:11:51.718417 4687 generic.go:334] "Generic (PLEG): container finished" podID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerID="fe34e2ca9eefa548eb1d2063a8eebe94b9201e57d79e74b91857937fb5afc200" exitCode=0 Mar 14 09:11:51 crc kubenswrapper[4687]: I0314 09:11:51.718485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerDied","Data":"fe34e2ca9eefa548eb1d2063a8eebe94b9201e57d79e74b91857937fb5afc200"} Mar 14 09:11:51 crc kubenswrapper[4687]: I0314 09:11:51.997700 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.000385 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.019633 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.049117 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjq8\" (UniqueName: \"kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.049228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.049324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.150426 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjq8\" (UniqueName: \"kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.150556 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.150665 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.151184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.151422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.180350 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjq8\" (UniqueName: \"kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8\") pod \"redhat-operators-7b5vb\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.317284 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.542135 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:11:52 crc kubenswrapper[4687]: I0314 09:11:52.738699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerStarted","Data":"4e9837957948720af540cdc5db823397f9055ae0000c561e33c1bfaad354a354"} Mar 14 09:11:53 crc kubenswrapper[4687]: I0314 09:11:53.744636 4687 generic.go:334] "Generic (PLEG): container finished" podID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerID="7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af" exitCode=0 Mar 14 09:11:53 crc kubenswrapper[4687]: I0314 09:11:53.744881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerDied","Data":"7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af"} Mar 14 09:11:53 crc kubenswrapper[4687]: I0314 09:11:53.747534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerStarted","Data":"1d449ff67627a9c07a83d7bd377659ce04e354c7f77ae706b9fe26d46a561c24"} Mar 14 09:11:54 crc kubenswrapper[4687]: I0314 09:11:54.759167 4687 generic.go:334] "Generic (PLEG): container finished" podID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerID="1d449ff67627a9c07a83d7bd377659ce04e354c7f77ae706b9fe26d46a561c24" exitCode=0 Mar 14 09:11:54 crc kubenswrapper[4687]: I0314 09:11:54.759238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" 
event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerDied","Data":"1d449ff67627a9c07a83d7bd377659ce04e354c7f77ae706b9fe26d46a561c24"} Mar 14 09:11:55 crc kubenswrapper[4687]: I0314 09:11:55.765533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerStarted","Data":"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9"} Mar 14 09:11:55 crc kubenswrapper[4687]: I0314 09:11:55.768371 4687 generic.go:334] "Generic (PLEG): container finished" podID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerID="40f4c7822c09d36f31d4a7bd4841a382dd5e2798e86b3aac4be37443312efaba" exitCode=0 Mar 14 09:11:55 crc kubenswrapper[4687]: I0314 09:11:55.768412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerDied","Data":"40f4c7822c09d36f31d4a7bd4841a382dd5e2798e86b3aac4be37443312efaba"} Mar 14 09:11:56 crc kubenswrapper[4687]: I0314 09:11:56.777944 4687 generic.go:334] "Generic (PLEG): container finished" podID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerID="6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9" exitCode=0 Mar 14 09:11:56 crc kubenswrapper[4687]: I0314 09:11:56.778221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerDied","Data":"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9"} Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.024770 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.111967 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twzzt\" (UniqueName: \"kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt\") pod \"26714b78-95c2-42a9-bb50-a728f54b5c8a\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.112025 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util\") pod \"26714b78-95c2-42a9-bb50-a728f54b5c8a\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.116597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle\") pod \"26714b78-95c2-42a9-bb50-a728f54b5c8a\" (UID: \"26714b78-95c2-42a9-bb50-a728f54b5c8a\") " Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.117709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle" (OuterVolumeSpecName: "bundle") pod "26714b78-95c2-42a9-bb50-a728f54b5c8a" (UID: "26714b78-95c2-42a9-bb50-a728f54b5c8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.118192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt" (OuterVolumeSpecName: "kube-api-access-twzzt") pod "26714b78-95c2-42a9-bb50-a728f54b5c8a" (UID: "26714b78-95c2-42a9-bb50-a728f54b5c8a"). InnerVolumeSpecName "kube-api-access-twzzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.121799 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util" (OuterVolumeSpecName: "util") pod "26714b78-95c2-42a9-bb50-a728f54b5c8a" (UID: "26714b78-95c2-42a9-bb50-a728f54b5c8a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.218048 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.218094 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twzzt\" (UniqueName: \"kubernetes.io/projected/26714b78-95c2-42a9-bb50-a728f54b5c8a-kube-api-access-twzzt\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.218111 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26714b78-95c2-42a9-bb50-a728f54b5c8a-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.908152 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.916796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9" event={"ID":"26714b78-95c2-42a9-bb50-a728f54b5c8a","Type":"ContainerDied","Data":"b92f42b724c8f3055257e97eb1ecba53ff99123e3c1793ffe418f7df7b0b44a1"} Mar 14 09:11:57 crc kubenswrapper[4687]: I0314 09:11:57.916842 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92f42b724c8f3055257e97eb1ecba53ff99123e3c1793ffe418f7df7b0b44a1" Mar 14 09:11:58 crc kubenswrapper[4687]: I0314 09:11:58.916805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerStarted","Data":"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea"} Mar 14 09:11:58 crc kubenswrapper[4687]: I0314 09:11:58.937947 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7b5vb" podStartSLOduration=3.68526108 podStartE2EDuration="7.937929621s" podCreationTimestamp="2026-03-14 09:11:51 +0000 UTC" firstStartedPulling="2026-03-14 09:11:53.746437729 +0000 UTC m=+898.734678104" lastFinishedPulling="2026-03-14 09:11:57.99910626 +0000 UTC m=+902.987346645" observedRunningTime="2026-03-14 09:11:58.937378517 +0000 UTC m=+903.925618932" watchObservedRunningTime="2026-03-14 09:11:58.937929621 +0000 UTC m=+903.926169996" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.138882 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557992-wbxz7"] Mar 14 09:12:00 crc kubenswrapper[4687]: E0314 09:12:00.139394 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" 
containerName="pull" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.139406 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="pull" Mar 14 09:12:00 crc kubenswrapper[4687]: E0314 09:12:00.139420 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="util" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.139426 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="util" Mar 14 09:12:00 crc kubenswrapper[4687]: E0314 09:12:00.139441 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="extract" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.139447 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="extract" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.139542 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="26714b78-95c2-42a9-bb50-a728f54b5c8a" containerName="extract" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.139907 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.141960 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.142074 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.142606 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.153102 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-wbxz7"] Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.258549 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqv6z\" (UniqueName: \"kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z\") pod \"auto-csr-approver-29557992-wbxz7\" (UID: \"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a\") " pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.359953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqv6z\" (UniqueName: \"kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z\") pod \"auto-csr-approver-29557992-wbxz7\" (UID: \"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a\") " pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.380589 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqv6z\" (UniqueName: \"kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z\") pod \"auto-csr-approver-29557992-wbxz7\" (UID: \"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a\") " 
pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.453897 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.825469 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d"] Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.826538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.828287 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-r44vs" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.828608 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.829023 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.839050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d"] Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.865442 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4shx\" (UniqueName: \"kubernetes.io/projected/2674f9c4-63f3-446d-9dfd-7df8abe18d59-kube-api-access-c4shx\") pod \"nmstate-operator-796d4cfff4-zxl4d\" (UID: \"2674f9c4-63f3-446d-9dfd-7df8abe18d59\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.930797 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-wbxz7"] Mar 14 09:12:00 crc kubenswrapper[4687]: W0314 
09:12:00.939676 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f15f9e9_d95b_4b09_9ac6_a3aa446aca5a.slice/crio-a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75 WatchSource:0}: Error finding container a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75: Status 404 returned error can't find the container with id a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75 Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.966290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4shx\" (UniqueName: \"kubernetes.io/projected/2674f9c4-63f3-446d-9dfd-7df8abe18d59-kube-api-access-c4shx\") pod \"nmstate-operator-796d4cfff4-zxl4d\" (UID: \"2674f9c4-63f3-446d-9dfd-7df8abe18d59\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" Mar 14 09:12:00 crc kubenswrapper[4687]: I0314 09:12:00.983896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4shx\" (UniqueName: \"kubernetes.io/projected/2674f9c4-63f3-446d-9dfd-7df8abe18d59-kube-api-access-c4shx\") pod \"nmstate-operator-796d4cfff4-zxl4d\" (UID: \"2674f9c4-63f3-446d-9dfd-7df8abe18d59\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" Mar 14 09:12:01 crc kubenswrapper[4687]: I0314 09:12:01.146931 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" Mar 14 09:12:01 crc kubenswrapper[4687]: I0314 09:12:01.380879 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d"] Mar 14 09:12:01 crc kubenswrapper[4687]: I0314 09:12:01.933400 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" event={"ID":"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a","Type":"ContainerStarted","Data":"a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75"} Mar 14 09:12:01 crc kubenswrapper[4687]: I0314 09:12:01.936990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" event={"ID":"2674f9c4-63f3-446d-9dfd-7df8abe18d59","Type":"ContainerStarted","Data":"c3191f4864943951c4527c260552ce9aa28c30e9dff4c63bb16595809cc083b5"} Mar 14 09:12:02 crc kubenswrapper[4687]: I0314 09:12:02.318044 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:02 crc kubenswrapper[4687]: I0314 09:12:02.318487 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:02 crc kubenswrapper[4687]: I0314 09:12:02.943725 4687 generic.go:334] "Generic (PLEG): container finished" podID="2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" containerID="8f4267d3e2545e9e23176c353790c680d7bd27ee9596d7727dedfa495f39ed03" exitCode=0 Mar 14 09:12:02 crc kubenswrapper[4687]: I0314 09:12:02.943800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" event={"ID":"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a","Type":"ContainerDied","Data":"8f4267d3e2545e9e23176c353790c680d7bd27ee9596d7727dedfa495f39ed03"} Mar 14 09:12:03 crc kubenswrapper[4687]: I0314 09:12:03.359294 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-7b5vb" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="registry-server" probeResult="failure" output=< Mar 14 09:12:03 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:12:03 crc kubenswrapper[4687]: > Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.204287 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.307424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqv6z\" (UniqueName: \"kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z\") pod \"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a\" (UID: \"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a\") " Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.313814 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z" (OuterVolumeSpecName: "kube-api-access-hqv6z") pod "2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" (UID: "2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a"). InnerVolumeSpecName "kube-api-access-hqv6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.409179 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqv6z\" (UniqueName: \"kubernetes.io/projected/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a-kube-api-access-hqv6z\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.961247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" event={"ID":"2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a","Type":"ContainerDied","Data":"a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75"} Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.961283 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96e2e08a081409507656c47c5ea760a5bde13a03d87c261124e3b25dd6f5a75" Mar 14 09:12:04 crc kubenswrapper[4687]: I0314 09:12:04.961300 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-wbxz7" Mar 14 09:12:05 crc kubenswrapper[4687]: I0314 09:12:05.268914 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-2xszk"] Mar 14 09:12:05 crc kubenswrapper[4687]: I0314 09:12:05.274941 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-2xszk"] Mar 14 09:12:05 crc kubenswrapper[4687]: I0314 09:12:05.745308 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="029b6f09-ba0c-427c-ac9d-5092188dab67" path="/var/lib/kubelet/pods/029b6f09-ba0c-427c-ac9d-5092188dab67/volumes" Mar 14 09:12:05 crc kubenswrapper[4687]: I0314 09:12:05.971024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" 
event={"ID":"2674f9c4-63f3-446d-9dfd-7df8abe18d59","Type":"ContainerStarted","Data":"d7b54cb3f80e798fdaec459c78b68e0e2a51d162197f8c473f9053ed1507c1ce"} Mar 14 09:12:05 crc kubenswrapper[4687]: I0314 09:12:05.993920 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-zxl4d" podStartSLOduration=2.487329217 podStartE2EDuration="5.993897984s" podCreationTimestamp="2026-03-14 09:12:00 +0000 UTC" firstStartedPulling="2026-03-14 09:12:01.389775204 +0000 UTC m=+906.378015579" lastFinishedPulling="2026-03-14 09:12:04.896343971 +0000 UTC m=+909.884584346" observedRunningTime="2026-03-14 09:12:05.987313712 +0000 UTC m=+910.975554127" watchObservedRunningTime="2026-03-14 09:12:05.993897984 +0000 UTC m=+910.982138369" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.927988 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb"] Mar 14 09:12:10 crc kubenswrapper[4687]: E0314 09:12:10.928585 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" containerName="oc" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.928601 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" containerName="oc" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.928723 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" containerName="oc" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.929415 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.931595 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-v7kxr" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.946156 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq"] Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.957258 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb"] Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.968212 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.969547 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq"] Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.982760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.996019 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8q525"] Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.997217 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5jf\" (UniqueName: \"kubernetes.io/projected/76fc1cd0-a564-4de3-9a3b-d420615cd640-kube-api-access-9g5jf\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.997275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbdv\" (UniqueName: 
\"kubernetes.io/projected/cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c-kube-api-access-qvbdv\") pod \"nmstate-metrics-9b8c8685d-klcqb\" (UID: \"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.997342 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:10 crc kubenswrapper[4687]: I0314 09:12:10.997608 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.080288 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.081213 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.084761 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gpt7l" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.084869 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.085062 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.094801 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.097955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g5jf\" (UniqueName: \"kubernetes.io/projected/76fc1cd0-a564-4de3-9a3b-d420615cd640-kube-api-access-9g5jf\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098010 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-ovs-socket\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098035 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-dbus-socket\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 
09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbdv\" (UniqueName: \"kubernetes.io/projected/cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c-kube-api-access-qvbdv\") pod \"nmstate-metrics-9b8c8685d-klcqb\" (UID: \"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098085 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqv7\" (UniqueName: \"kubernetes.io/projected/e25c82b0-63e6-47a6-9111-d35760fac0cf-kube-api-access-9qqv7\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-nmstate-lock\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.098134 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:11 crc kubenswrapper[4687]: E0314 09:12:11.098246 4687 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 14 09:12:11 crc kubenswrapper[4687]: E0314 09:12:11.098291 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair podName:76fc1cd0-a564-4de3-9a3b-d420615cd640 nodeName:}" failed. No retries permitted until 2026-03-14 09:12:11.598274872 +0000 UTC m=+916.586515247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair") pod "nmstate-webhook-5f558f5558-h5hgq" (UID: "76fc1cd0-a564-4de3-9a3b-d420615cd640") : secret "openshift-nmstate-webhook" not found Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.120564 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5jf\" (UniqueName: \"kubernetes.io/projected/76fc1cd0-a564-4de3-9a3b-d420615cd640-kube-api-access-9g5jf\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.122079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbdv\" (UniqueName: \"kubernetes.io/projected/cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c-kube-api-access-qvbdv\") pod \"nmstate-metrics-9b8c8685d-klcqb\" (UID: \"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-ovs-socket\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-dbus-socket\") pod 
\"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecfaf9cc-3278-4efe-8dcc-050341daded0-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2pj\" (UniqueName: \"kubernetes.io/projected/ecfaf9cc-3278-4efe-8dcc-050341daded0-kube-api-access-zx2pj\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfaf9cc-3278-4efe-8dcc-050341daded0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qqv7\" (UniqueName: \"kubernetes.io/projected/e25c82b0-63e6-47a6-9111-d35760fac0cf-kube-api-access-9qqv7\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199705 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-nmstate-lock\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-nmstate-lock\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.199849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-ovs-socket\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.200091 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e25c82b0-63e6-47a6-9111-d35760fac0cf-dbus-socket\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.234948 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qqv7\" (UniqueName: \"kubernetes.io/projected/e25c82b0-63e6-47a6-9111-d35760fac0cf-kube-api-access-9qqv7\") pod \"nmstate-handler-8q525\" (UID: \"e25c82b0-63e6-47a6-9111-d35760fac0cf\") " pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.246928 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.275868 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74fd676476-2bbjn"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.278049 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.292201 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74fd676476-2bbjn"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.309047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2pj\" (UniqueName: \"kubernetes.io/projected/ecfaf9cc-3278-4efe-8dcc-050341daded0-kube-api-access-zx2pj\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.309101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfaf9cc-3278-4efe-8dcc-050341daded0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.309221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecfaf9cc-3278-4efe-8dcc-050341daded0-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.310528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecfaf9cc-3278-4efe-8dcc-050341daded0-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.314513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfaf9cc-3278-4efe-8dcc-050341daded0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.318676 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.329856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2pj\" (UniqueName: \"kubernetes.io/projected/ecfaf9cc-3278-4efe-8dcc-050341daded0-kube-api-access-zx2pj\") pod \"nmstate-console-plugin-86f58fcf4-nd982\" (UID: \"ecfaf9cc-3278-4efe-8dcc-050341daded0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: W0314 09:12:11.341481 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25c82b0_63e6_47a6_9111_d35760fac0cf.slice/crio-da5d5b0a947c6a9243c681db093449bdeac66bae4b6ac83497359a96faae4571 WatchSource:0}: Error finding container da5d5b0a947c6a9243c681db093449bdeac66bae4b6ac83497359a96faae4571: Status 404 returned error can't find the container with id da5d5b0a947c6a9243c681db093449bdeac66bae4b6ac83497359a96faae4571 Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.396929 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410369 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-oauth-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-oauth-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410459 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv484\" (UniqueName: \"kubernetes.io/projected/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-kube-api-access-cv484\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-service-ca\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410640 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-trusted-ca-bundle\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.410827 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.457702 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-oauth-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512637 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv484\" (UniqueName: \"kubernetes.io/projected/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-kube-api-access-cv484\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " 
pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-service-ca\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-trusted-ca-bundle\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.512991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.513016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-oauth-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " 
pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.513981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-service-ca\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.514716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.515350 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-oauth-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.515721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-trusted-ca-bundle\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.516710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-oauth-config\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc 
kubenswrapper[4687]: I0314 09:12:11.517830 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-console-serving-cert\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.532199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv484\" (UniqueName: \"kubernetes.io/projected/3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e-kube-api-access-cv484\") pod \"console-74fd676476-2bbjn\" (UID: \"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e\") " pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.559918 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982"] Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.613985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.617764 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76fc1cd0-a564-4de3-9a3b-d420615cd640-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-h5hgq\" (UID: \"76fc1cd0-a564-4de3-9a3b-d420615cd640\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.640006 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.840987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74fd676476-2bbjn"] Mar 14 09:12:11 crc kubenswrapper[4687]: W0314 09:12:11.849801 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b8f9abe_5ae0_40a5_afbf_a5b8deb4e01e.slice/crio-2b45d0d6e57508d9bd4a3b29a357973d5963ba21a2fa5504ba6ca66ddacb4fda WatchSource:0}: Error finding container 2b45d0d6e57508d9bd4a3b29a357973d5963ba21a2fa5504ba6ca66ddacb4fda: Status 404 returned error can't find the container with id 2b45d0d6e57508d9bd4a3b29a357973d5963ba21a2fa5504ba6ca66ddacb4fda Mar 14 09:12:11 crc kubenswrapper[4687]: I0314 09:12:11.903455 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.041996 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" event={"ID":"ecfaf9cc-3278-4efe-8dcc-050341daded0","Type":"ContainerStarted","Data":"fa0e43543b4cb3326ccbe4ccd0b53f3aabca4bf4c8d1e287d81e9556b6eec6f8"} Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.047482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" event={"ID":"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c","Type":"ContainerStarted","Data":"fa2294efd3517726a2069736c6691f46d8ce9cc5f3891725f21559a20c3af87f"} Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.049095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8q525" event={"ID":"e25c82b0-63e6-47a6-9111-d35760fac0cf","Type":"ContainerStarted","Data":"da5d5b0a947c6a9243c681db093449bdeac66bae4b6ac83497359a96faae4571"} Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 
09:12:12.051088 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fd676476-2bbjn" event={"ID":"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e","Type":"ContainerStarted","Data":"8b23c15c558ea6cecce2d397de4634c21c94faf2da71b053340593abac0dd838"} Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.051120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fd676476-2bbjn" event={"ID":"3b8f9abe-5ae0-40a5-afbf-a5b8deb4e01e","Type":"ContainerStarted","Data":"2b45d0d6e57508d9bd4a3b29a357973d5963ba21a2fa5504ba6ca66ddacb4fda"} Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.073066 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74fd676476-2bbjn" podStartSLOduration=1.073041656 podStartE2EDuration="1.073041656s" podCreationTimestamp="2026-03-14 09:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:12:12.072932194 +0000 UTC m=+917.061172579" watchObservedRunningTime="2026-03-14 09:12:12.073041656 +0000 UTC m=+917.061282071" Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.144386 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq"] Mar 14 09:12:12 crc kubenswrapper[4687]: W0314 09:12:12.159482 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76fc1cd0_a564_4de3_9a3b_d420615cd640.slice/crio-bdd9153b98d8d85c024477e77f0b3b0a1bb87560b46ba1cfaecdc88d9619d063 WatchSource:0}: Error finding container bdd9153b98d8d85c024477e77f0b3b0a1bb87560b46ba1cfaecdc88d9619d063: Status 404 returned error can't find the container with id bdd9153b98d8d85c024477e77f0b3b0a1bb87560b46ba1cfaecdc88d9619d063 Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.392014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.441317 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:12 crc kubenswrapper[4687]: I0314 09:12:12.635743 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:12:13 crc kubenswrapper[4687]: I0314 09:12:13.057750 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" event={"ID":"76fc1cd0-a564-4de3-9a3b-d420615cd640","Type":"ContainerStarted","Data":"bdd9153b98d8d85c024477e77f0b3b0a1bb87560b46ba1cfaecdc88d9619d063"} Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.062484 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7b5vb" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="registry-server" containerID="cri-o://2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea" gracePeriod=2 Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.447401 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.550717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtjq8\" (UniqueName: \"kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8\") pod \"b069ddfe-9212-46ff-a05d-b91a0394e133\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.550820 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content\") pod \"b069ddfe-9212-46ff-a05d-b91a0394e133\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.550847 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities\") pod \"b069ddfe-9212-46ff-a05d-b91a0394e133\" (UID: \"b069ddfe-9212-46ff-a05d-b91a0394e133\") " Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.551764 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities" (OuterVolumeSpecName: "utilities") pod "b069ddfe-9212-46ff-a05d-b91a0394e133" (UID: "b069ddfe-9212-46ff-a05d-b91a0394e133"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.555671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8" (OuterVolumeSpecName: "kube-api-access-gtjq8") pod "b069ddfe-9212-46ff-a05d-b91a0394e133" (UID: "b069ddfe-9212-46ff-a05d-b91a0394e133"). InnerVolumeSpecName "kube-api-access-gtjq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.652076 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtjq8\" (UniqueName: \"kubernetes.io/projected/b069ddfe-9212-46ff-a05d-b91a0394e133-kube-api-access-gtjq8\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.652115 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.691904 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b069ddfe-9212-46ff-a05d-b91a0394e133" (UID: "b069ddfe-9212-46ff-a05d-b91a0394e133"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:14 crc kubenswrapper[4687]: I0314 09:12:14.753302 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b069ddfe-9212-46ff-a05d-b91a0394e133-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.071446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" event={"ID":"ecfaf9cc-3278-4efe-8dcc-050341daded0","Type":"ContainerStarted","Data":"0dafb18574866fd7529f34c9843769b0eb74c1cf8e1528d002fc4b12bdbd22bf"} Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.074551 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" event={"ID":"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c","Type":"ContainerStarted","Data":"802f5c6fb926eddfceab4fa804d83bfa5dd0ebc1eb53b7b51866cc2508f5ab14"} Mar 14 09:12:15 crc 
kubenswrapper[4687]: I0314 09:12:15.076618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" event={"ID":"76fc1cd0-a564-4de3-9a3b-d420615cd640","Type":"ContainerStarted","Data":"51a4c11f0a0a04814c9c9d7015581705cee35d090fbc9d1aeecb9b43a941e351"} Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.079823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8q525" event={"ID":"e25c82b0-63e6-47a6-9111-d35760fac0cf","Type":"ContainerStarted","Data":"9855231d58861756b518ba03cfe7db7678c94d6e10f9eb2953fe291f338d58aa"} Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.081783 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.083042 4687 generic.go:334] "Generic (PLEG): container finished" podID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerID="2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea" exitCode=0 Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.083095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerDied","Data":"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea"} Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.083121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b5vb" event={"ID":"b069ddfe-9212-46ff-a05d-b91a0394e133","Type":"ContainerDied","Data":"4e9837957948720af540cdc5db823397f9055ae0000c561e33c1bfaad354a354"} Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.083142 4687 scope.go:117] "RemoveContainer" containerID="2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.083140 4687 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7b5vb" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.109587 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nd982" podStartSLOduration=1.5461465049999998 podStartE2EDuration="4.109570502s" podCreationTimestamp="2026-03-14 09:12:11 +0000 UTC" firstStartedPulling="2026-03-14 09:12:11.569190667 +0000 UTC m=+916.557431042" lastFinishedPulling="2026-03-14 09:12:14.132614664 +0000 UTC m=+919.120855039" observedRunningTime="2026-03-14 09:12:15.09128666 +0000 UTC m=+920.079527045" watchObservedRunningTime="2026-03-14 09:12:15.109570502 +0000 UTC m=+920.097810877" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.112355 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8q525" podStartSLOduration=2.312494564 podStartE2EDuration="5.112344651s" podCreationTimestamp="2026-03-14 09:12:10 +0000 UTC" firstStartedPulling="2026-03-14 09:12:11.343029 +0000 UTC m=+916.331269375" lastFinishedPulling="2026-03-14 09:12:14.142879087 +0000 UTC m=+919.131119462" observedRunningTime="2026-03-14 09:12:15.107707526 +0000 UTC m=+920.095947911" watchObservedRunningTime="2026-03-14 09:12:15.112344651 +0000 UTC m=+920.100585026" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.135742 4687 scope.go:117] "RemoveContainer" containerID="6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.137150 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" podStartSLOduration=3.156770684 podStartE2EDuration="5.137127523s" podCreationTimestamp="2026-03-14 09:12:10 +0000 UTC" firstStartedPulling="2026-03-14 09:12:12.162536448 +0000 UTC m=+917.150776823" lastFinishedPulling="2026-03-14 09:12:14.142893277 +0000 UTC m=+919.131133662" 
observedRunningTime="2026-03-14 09:12:15.131260708 +0000 UTC m=+920.119501103" watchObservedRunningTime="2026-03-14 09:12:15.137127523 +0000 UTC m=+920.125367908" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.157386 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.164145 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7b5vb"] Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.171797 4687 scope.go:117] "RemoveContainer" containerID="7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.200215 4687 scope.go:117] "RemoveContainer" containerID="2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea" Mar 14 09:12:15 crc kubenswrapper[4687]: E0314 09:12:15.200673 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea\": container with ID starting with 2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea not found: ID does not exist" containerID="2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.200707 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea"} err="failed to get container status \"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea\": rpc error: code = NotFound desc = could not find container \"2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea\": container with ID starting with 2c79a15987cf90ac7216b98e1be95809bcd92bf7023136052fa7c0f7dc1dbcea not found: ID does not exist" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.200728 4687 
scope.go:117] "RemoveContainer" containerID="6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9" Mar 14 09:12:15 crc kubenswrapper[4687]: E0314 09:12:15.201153 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9\": container with ID starting with 6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9 not found: ID does not exist" containerID="6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.201177 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9"} err="failed to get container status \"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9\": rpc error: code = NotFound desc = could not find container \"6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9\": container with ID starting with 6d68239ff38b72747ff0bc209debffe2999fad7b066d64e1dbdcea4a22c541e9 not found: ID does not exist" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.201190 4687 scope.go:117] "RemoveContainer" containerID="7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af" Mar 14 09:12:15 crc kubenswrapper[4687]: E0314 09:12:15.201408 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af\": container with ID starting with 7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af not found: ID does not exist" containerID="7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.201427 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af"} err="failed to get container status \"7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af\": rpc error: code = NotFound desc = could not find container \"7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af\": container with ID starting with 7875c4e594966009be38208a3b5ccd4eee21befdcf00630fa1df57b1249f14af not found: ID does not exist" Mar 14 09:12:15 crc kubenswrapper[4687]: I0314 09:12:15.748038 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" path="/var/lib/kubelet/pods/b069ddfe-9212-46ff-a05d-b91a0394e133/volumes" Mar 14 09:12:16 crc kubenswrapper[4687]: I0314 09:12:16.090610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:18 crc kubenswrapper[4687]: I0314 09:12:18.122585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" event={"ID":"cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c","Type":"ContainerStarted","Data":"16345e031185625fa4fe06f623ba7a8cc92994793ce9a19687ebcc7a63c1dc08"} Mar 14 09:12:18 crc kubenswrapper[4687]: I0314 09:12:18.145085 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-klcqb" podStartSLOduration=2.48075246 podStartE2EDuration="8.145067643s" podCreationTimestamp="2026-03-14 09:12:10 +0000 UTC" firstStartedPulling="2026-03-14 09:12:11.472934759 +0000 UTC m=+916.461175134" lastFinishedPulling="2026-03-14 09:12:17.137249942 +0000 UTC m=+922.125490317" observedRunningTime="2026-03-14 09:12:18.141505315 +0000 UTC m=+923.129745690" watchObservedRunningTime="2026-03-14 09:12:18.145067643 +0000 UTC m=+923.133308018" Mar 14 09:12:21 crc kubenswrapper[4687]: I0314 09:12:21.358417 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-8q525" Mar 14 09:12:21 crc kubenswrapper[4687]: I0314 09:12:21.640946 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:21 crc kubenswrapper[4687]: I0314 09:12:21.641422 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:21 crc kubenswrapper[4687]: I0314 09:12:21.646493 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:22 crc kubenswrapper[4687]: I0314 09:12:22.158890 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74fd676476-2bbjn" Mar 14 09:12:22 crc kubenswrapper[4687]: I0314 09:12:22.220290 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:12:31 crc kubenswrapper[4687]: I0314 09:12:31.910878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-h5hgq" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.824780 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx"] Mar 14 09:12:46 crc kubenswrapper[4687]: E0314 09:12:46.825520 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="extract-utilities" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.825535 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="extract-utilities" Mar 14 09:12:46 crc kubenswrapper[4687]: E0314 09:12:46.825550 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="registry-server" Mar 14 09:12:46 crc 
kubenswrapper[4687]: I0314 09:12:46.825557 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="registry-server" Mar 14 09:12:46 crc kubenswrapper[4687]: E0314 09:12:46.825567 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="extract-content" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.825574 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="extract-content" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.825690 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b069ddfe-9212-46ff-a05d-b91a0394e133" containerName="registry-server" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.826692 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.829012 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.836969 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx"] Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.895461 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jq7f\" (UniqueName: \"kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.895532 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.895553 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.997107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.997197 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.997308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jq7f\" (UniqueName: 
\"kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.997700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:46 crc kubenswrapper[4687]: I0314 09:12:46.997748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.023915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jq7f\" (UniqueName: \"kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.148174 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.265854 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4bm6l" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" containerID="cri-o://85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82" gracePeriod=15 Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.567999 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx"] Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.620756 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bm6l_407cd8a9-1364-412b-9d41-7c66fc18bd5e/console/0.log" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.620834 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708313 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708412 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708431 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh72q\" (UniqueName: \"kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708477 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.708557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert\") pod \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\" (UID: \"407cd8a9-1364-412b-9d41-7c66fc18bd5e\") " Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.709373 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.709392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.709437 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca" (OuterVolumeSpecName: "service-ca") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.709485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config" (OuterVolumeSpecName: "console-config") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.713498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.713574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q" (OuterVolumeSpecName: "kube-api-access-jh72q") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "kube-api-access-jh72q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.713718 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "407cd8a9-1364-412b-9d41-7c66fc18bd5e" (UID: "407cd8a9-1364-412b-9d41-7c66fc18bd5e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810208 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810261 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810273 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810285 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh72q\" (UniqueName: \"kubernetes.io/projected/407cd8a9-1364-412b-9d41-7c66fc18bd5e-kube-api-access-jh72q\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810297 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810307 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/407cd8a9-1364-412b-9d41-7c66fc18bd5e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:47 crc kubenswrapper[4687]: I0314 09:12:47.810317 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/407cd8a9-1364-412b-9d41-7c66fc18bd5e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:48 crc 
kubenswrapper[4687]: I0314 09:12:48.324062 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bm6l_407cd8a9-1364-412b-9d41-7c66fc18bd5e/console/0.log" Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.324463 4687 generic.go:334] "Generic (PLEG): container finished" podID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerID="85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82" exitCode=2 Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.324528 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bm6l" Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.324541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bm6l" event={"ID":"407cd8a9-1364-412b-9d41-7c66fc18bd5e","Type":"ContainerDied","Data":"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82"} Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.324623 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bm6l" event={"ID":"407cd8a9-1364-412b-9d41-7c66fc18bd5e","Type":"ContainerDied","Data":"57f6d4d3e919c55661c7d577cdc2ae7cfa1a939289a8e40f77e5e448093372d7"} Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.324647 4687 scope.go:117] "RemoveContainer" containerID="85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82" Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.326464 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerID="88b82b551bdca56dc4c268942fa29555e9434c4a26213a4f46e6114222dc8bca" exitCode=0 Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.326503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" 
event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerDied","Data":"88b82b551bdca56dc4c268942fa29555e9434c4a26213a4f46e6114222dc8bca"} Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.326540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerStarted","Data":"2a6e597b9964dd661345c74778f0b397226d06b7cf7c9e3b973fbcfd08482c12"} Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.345082 4687 scope.go:117] "RemoveContainer" containerID="85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82" Mar 14 09:12:48 crc kubenswrapper[4687]: E0314 09:12:48.345944 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82\": container with ID starting with 85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82 not found: ID does not exist" containerID="85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82" Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.345990 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82"} err="failed to get container status \"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82\": rpc error: code = NotFound desc = could not find container \"85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82\": container with ID starting with 85e9e6a291809fd6b5a87309f9fce3ad2a9511098198108a318a6d79ccdf2a82 not found: ID does not exist" Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.359127 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:12:48 crc kubenswrapper[4687]: I0314 09:12:48.364214 4687 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4bm6l"] Mar 14 09:12:49 crc kubenswrapper[4687]: I0314 09:12:49.745776 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" path="/var/lib/kubelet/pods/407cd8a9-1364-412b-9d41-7c66fc18bd5e/volumes" Mar 14 09:12:50 crc kubenswrapper[4687]: I0314 09:12:50.342028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerStarted","Data":"b95d30045623973c75e79791f0ffa857195ed6a81b64a63f6e88c8f91dc2a1e5"} Mar 14 09:12:51 crc kubenswrapper[4687]: I0314 09:12:51.352163 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerID="b95d30045623973c75e79791f0ffa857195ed6a81b64a63f6e88c8f91dc2a1e5" exitCode=0 Mar 14 09:12:51 crc kubenswrapper[4687]: I0314 09:12:51.352220 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerDied","Data":"b95d30045623973c75e79791f0ffa857195ed6a81b64a63f6e88c8f91dc2a1e5"} Mar 14 09:12:52 crc kubenswrapper[4687]: I0314 09:12:52.360000 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerID="ceaed1c10909f8842cb094c98bbc928ede6064678cd19cecca3259540df06790" exitCode=0 Mar 14 09:12:52 crc kubenswrapper[4687]: I0314 09:12:52.360064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerDied","Data":"ceaed1c10909f8842cb094c98bbc928ede6064678cd19cecca3259540df06790"} Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.602105 4687 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.680588 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jq7f\" (UniqueName: \"kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f\") pod \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.680627 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util\") pod \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.680686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle\") pod \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\" (UID: \"4f31a3c2-0deb-4826-ab7e-0da7a8091f19\") " Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.681686 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle" (OuterVolumeSpecName: "bundle") pod "4f31a3c2-0deb-4826-ab7e-0da7a8091f19" (UID: "4f31a3c2-0deb-4826-ab7e-0da7a8091f19"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.690461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f" (OuterVolumeSpecName: "kube-api-access-9jq7f") pod "4f31a3c2-0deb-4826-ab7e-0da7a8091f19" (UID: "4f31a3c2-0deb-4826-ab7e-0da7a8091f19"). 
InnerVolumeSpecName "kube-api-access-9jq7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.782394 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jq7f\" (UniqueName: \"kubernetes.io/projected/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-kube-api-access-9jq7f\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.782425 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:53 crc kubenswrapper[4687]: I0314 09:12:53.991435 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util" (OuterVolumeSpecName: "util") pod "4f31a3c2-0deb-4826-ab7e-0da7a8091f19" (UID: "4f31a3c2-0deb-4826-ab7e-0da7a8091f19"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:54 crc kubenswrapper[4687]: I0314 09:12:54.087066 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f31a3c2-0deb-4826-ab7e-0da7a8091f19-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:54 crc kubenswrapper[4687]: I0314 09:12:54.374254 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" event={"ID":"4f31a3c2-0deb-4826-ab7e-0da7a8091f19","Type":"ContainerDied","Data":"2a6e597b9964dd661345c74778f0b397226d06b7cf7c9e3b973fbcfd08482c12"} Mar 14 09:12:54 crc kubenswrapper[4687]: I0314 09:12:54.374295 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a6e597b9964dd661345c74778f0b397226d06b7cf7c9e3b973fbcfd08482c12" Mar 14 09:12:54 crc kubenswrapper[4687]: I0314 09:12:54.374363 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx" Mar 14 09:12:58 crc kubenswrapper[4687]: I0314 09:12:58.002474 4687 scope.go:117] "RemoveContainer" containerID="10a0c41f3ba3501f71eb365070ba9a695d5d9842227e624c309db1300b33b10b" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.848099 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v"] Mar 14 09:13:02 crc kubenswrapper[4687]: E0314 09:13:02.849993 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850081 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" Mar 14 09:13:02 crc kubenswrapper[4687]: E0314 09:13:02.850145 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="pull" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850205 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="pull" Mar 14 09:13:02 crc kubenswrapper[4687]: E0314 09:13:02.850266 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="util" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850346 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="util" Mar 14 09:13:02 crc kubenswrapper[4687]: E0314 09:13:02.850423 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="extract" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850484 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="extract" 
Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850673 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f31a3c2-0deb-4826-ab7e-0da7a8091f19" containerName="extract" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.850749 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="407cd8a9-1364-412b-9d41-7c66fc18bd5e" containerName="console" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.851365 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.861490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.862566 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.864394 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fgc2h" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.864734 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.864901 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.879603 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v"] Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.902383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-apiservice-cert\") pod 
\"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.902436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksgq\" (UniqueName: \"kubernetes.io/projected/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-kube-api-access-9ksgq\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:02 crc kubenswrapper[4687]: I0314 09:13:02.902490 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-webhook-cert\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.003296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-apiservice-cert\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.003387 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksgq\" (UniqueName: \"kubernetes.io/projected/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-kube-api-access-9ksgq\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " 
pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.003432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-webhook-cert\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.011104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-apiservice-cert\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.013024 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-webhook-cert\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.035198 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksgq\" (UniqueName: \"kubernetes.io/projected/cacb7e09-cbb9-4e89-a898-e6da0f4498b5-kube-api-access-9ksgq\") pod \"metallb-operator-controller-manager-74fd5dfb9c-pvx5v\" (UID: \"cacb7e09-cbb9-4e89-a898-e6da0f4498b5\") " pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.167091 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.177748 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg"] Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.178470 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.202102 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.202691 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qf2rw" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.203060 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.205498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwmq2\" (UniqueName: \"kubernetes.io/projected/77588bd6-38b1-4f43-a701-437cf2c3df99-kube-api-access-qwmq2\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.205561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-apiservice-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 
09:13:03.205599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-webhook-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.211740 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg"] Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.306734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwmq2\" (UniqueName: \"kubernetes.io/projected/77588bd6-38b1-4f43-a701-437cf2c3df99-kube-api-access-qwmq2\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.307082 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-apiservice-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.307116 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-webhook-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.312015 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-webhook-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.315225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77588bd6-38b1-4f43-a701-437cf2c3df99-apiservice-cert\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.337032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwmq2\" (UniqueName: \"kubernetes.io/projected/77588bd6-38b1-4f43-a701-437cf2c3df99-kube-api-access-qwmq2\") pod \"metallb-operator-webhook-server-6464f7b86-mdwhg\" (UID: \"77588bd6-38b1-4f43-a701-437cf2c3df99\") " pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.410759 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v"] Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.457896 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" event={"ID":"cacb7e09-cbb9-4e89-a898-e6da0f4498b5","Type":"ContainerStarted","Data":"bbf26cd914c9d8a74d04237eabf34efc11e382ef661256101c5dc9d94d127fa1"} Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.544146 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:03 crc kubenswrapper[4687]: I0314 09:13:03.731815 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg"] Mar 14 09:13:03 crc kubenswrapper[4687]: W0314 09:13:03.739648 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77588bd6_38b1_4f43_a701_437cf2c3df99.slice/crio-bd9a1728c1a0ae631e0cfbf8656fc0546a1fe624050bba93d14d688161e51073 WatchSource:0}: Error finding container bd9a1728c1a0ae631e0cfbf8656fc0546a1fe624050bba93d14d688161e51073: Status 404 returned error can't find the container with id bd9a1728c1a0ae631e0cfbf8656fc0546a1fe624050bba93d14d688161e51073 Mar 14 09:13:04 crc kubenswrapper[4687]: I0314 09:13:04.465608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" event={"ID":"77588bd6-38b1-4f43-a701-437cf2c3df99","Type":"ContainerStarted","Data":"bd9a1728c1a0ae631e0cfbf8656fc0546a1fe624050bba93d14d688161e51073"} Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.504753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" event={"ID":"cacb7e09-cbb9-4e89-a898-e6da0f4498b5","Type":"ContainerStarted","Data":"07f6b5ad47ffacd7c21e0ea84faed918a4a8351638aecd3c59e5d05b84acb2ea"} Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.505314 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.506669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" 
event={"ID":"77588bd6-38b1-4f43-a701-437cf2c3df99","Type":"ContainerStarted","Data":"38b9b242938a51896c62122d29b9804e3a004bbe26624226ee5ce2f9bf784f27"} Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.506758 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.534077 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" podStartSLOduration=2.142377405 podStartE2EDuration="7.534060102s" podCreationTimestamp="2026-03-14 09:13:02 +0000 UTC" firstStartedPulling="2026-03-14 09:13:03.440993434 +0000 UTC m=+968.429233809" lastFinishedPulling="2026-03-14 09:13:08.832676131 +0000 UTC m=+973.820916506" observedRunningTime="2026-03-14 09:13:09.533919398 +0000 UTC m=+974.522159783" watchObservedRunningTime="2026-03-14 09:13:09.534060102 +0000 UTC m=+974.522300477" Mar 14 09:13:09 crc kubenswrapper[4687]: I0314 09:13:09.560984 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" podStartSLOduration=1.462325482 podStartE2EDuration="6.560963358s" podCreationTimestamp="2026-03-14 09:13:03 +0000 UTC" firstStartedPulling="2026-03-14 09:13:03.743207051 +0000 UTC m=+968.731447416" lastFinishedPulling="2026-03-14 09:13:08.841844917 +0000 UTC m=+973.830085292" observedRunningTime="2026-03-14 09:13:09.5566048 +0000 UTC m=+974.544845185" watchObservedRunningTime="2026-03-14 09:13:09.560963358 +0000 UTC m=+974.549203733" Mar 14 09:13:23 crc kubenswrapper[4687]: I0314 09:13:23.547547 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6464f7b86-mdwhg" Mar 14 09:13:24 crc kubenswrapper[4687]: I0314 09:13:24.111490 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:13:24 crc kubenswrapper[4687]: I0314 09:13:24.111555 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.170241 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74fd5dfb9c-pvx5v" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.940836 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pg6ns"] Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.944053 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.946976 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx"] Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.947556 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.947779 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qbmnc" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.948059 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.948270 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.948465 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.960019 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx"] Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.991718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-sockets\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.991995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxkk\" (UniqueName: \"kubernetes.io/projected/90888982-f2a2-46f1-a099-05070e93b427-kube-api-access-mwxkk\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs\") 
pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/90888982-f2a2-46f1-a099-05070e93b427-frr-startup\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-conf\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfpg\" (UniqueName: \"kubernetes.io/projected/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-kube-api-access-hvfpg\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992687 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-metrics\") pod \"frr-k8s-pg6ns\" 
(UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:43 crc kubenswrapper[4687]: I0314 09:13:43.992753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-reloader\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.033999 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6qhvs"] Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.034916 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.039703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.040141 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.040527 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.040935 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-trbh6" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.068966 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-hdk5z"] Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.070230 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.072248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.078010 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-hdk5z"] Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094133 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-metrics\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-reloader\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094212 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metrics-certs\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-sockets\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxkk\" (UniqueName: \"kubernetes.io/projected/90888982-f2a2-46f1-a099-05070e93b427-kube-api-access-mwxkk\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094294 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/90888982-f2a2-46f1-a099-05070e93b427-frr-startup\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-conf\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094361 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfpg\" (UniqueName: \"kubernetes.io/projected/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-kube-api-access-hvfpg\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: 
\"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094384 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metallb-excludel2\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094401 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.094418 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bl2k\" (UniqueName: \"kubernetes.io/projected/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-kube-api-access-2bl2k\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.094458 4687 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.094506 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs podName:90888982-f2a2-46f1-a099-05070e93b427 nodeName:}" failed. No retries permitted until 2026-03-14 09:13:44.594490038 +0000 UTC m=+1009.582730413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs") pod "frr-k8s-pg6ns" (UID: "90888982-f2a2-46f1-a099-05070e93b427") : secret "frr-k8s-certs-secret" not found Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.095146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-reloader\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.095193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-sockets\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.095315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-metrics\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.095627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/90888982-f2a2-46f1-a099-05070e93b427-frr-startup\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.095785 4687 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.095919 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert podName:30d31ea4-3f62-4a7d-9c0c-162d87bab38a nodeName:}" failed. No retries permitted until 2026-03-14 09:13:44.595898823 +0000 UTC m=+1009.584139198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert") pod "frr-k8s-webhook-server-bcc4b6f68-dw5bx" (UID: "30d31ea4-3f62-4a7d-9c0c-162d87bab38a") : secret "frr-k8s-webhook-server-cert" not found Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.097777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/90888982-f2a2-46f1-a099-05070e93b427-frr-conf\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.113953 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxkk\" (UniqueName: \"kubernetes.io/projected/90888982-f2a2-46f1-a099-05070e93b427-kube-api-access-mwxkk\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.116135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfpg\" (UniqueName: \"kubernetes.io/projected/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-kube-api-access-hvfpg\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.195986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-cert\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " 
pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.197225 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2gc\" (UniqueName: \"kubernetes.io/projected/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-kube-api-access-pr2gc\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.197603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metallb-excludel2\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.197784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bl2k\" (UniqueName: \"kubernetes.io/projected/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-kube-api-access-2bl2k\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.197935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-metrics-certs\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.198072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metrics-certs\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 
09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.198166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.198251 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metallb-excludel2\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.198407 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.198457 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist podName:b69e2289-9be3-45bd-bcea-89dddbc5e1c2 nodeName:}" failed. No retries permitted until 2026-03-14 09:13:44.698441461 +0000 UTC m=+1009.686681826 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist") pod "speaker-6qhvs" (UID: "b69e2289-9be3-45bd-bcea-89dddbc5e1c2") : secret "metallb-memberlist" not found Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.202613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-metrics-certs\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.212691 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bl2k\" (UniqueName: \"kubernetes.io/projected/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-kube-api-access-2bl2k\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.299258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-metrics-certs\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.299799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-cert\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.299898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2gc\" (UniqueName: \"kubernetes.io/projected/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-kube-api-access-pr2gc\") pod 
\"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.303344 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-cert\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.303976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-metrics-certs\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.321241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2gc\" (UniqueName: \"kubernetes.io/projected/b20ebc9c-3733-4824-bd5a-6b1e6dc1265a-kube-api-access-pr2gc\") pod \"controller-7bb4cc7c98-hdk5z\" (UID: \"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a\") " pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.395916 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.604092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.604218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.607431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90888982-f2a2-46f1-a099-05070e93b427-metrics-certs\") pod \"frr-k8s-pg6ns\" (UID: \"90888982-f2a2-46f1-a099-05070e93b427\") " pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.607540 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30d31ea4-3f62-4a7d-9c0c-162d87bab38a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-dw5bx\" (UID: \"30d31ea4-3f62-4a7d-9c0c-162d87bab38a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.705489 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.705699 4687 secret.go:188] 
Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 09:13:44 crc kubenswrapper[4687]: E0314 09:13:44.705787 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist podName:b69e2289-9be3-45bd-bcea-89dddbc5e1c2 nodeName:}" failed. No retries permitted until 2026-03-14 09:13:45.7057664 +0000 UTC m=+1010.694006775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist") pod "speaker-6qhvs" (UID: "b69e2289-9be3-45bd-bcea-89dddbc5e1c2") : secret "metallb-memberlist" not found Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.811634 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-hdk5z"] Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.868669 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:44 crc kubenswrapper[4687]: I0314 09:13:44.885780 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.338494 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx"] Mar 14 09:13:45 crc kubenswrapper[4687]: W0314 09:13:45.341949 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30d31ea4_3f62_4a7d_9c0c_162d87bab38a.slice/crio-5ddf457cb1a7989a3f8100d8c387e48df99265220ae074db7fa671287c8345d2 WatchSource:0}: Error finding container 5ddf457cb1a7989a3f8100d8c387e48df99265220ae074db7fa671287c8345d2: Status 404 returned error can't find the container with id 5ddf457cb1a7989a3f8100d8c387e48df99265220ae074db7fa671287c8345d2 Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.721866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.727308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b69e2289-9be3-45bd-bcea-89dddbc5e1c2-memberlist\") pod \"speaker-6qhvs\" (UID: \"b69e2289-9be3-45bd-bcea-89dddbc5e1c2\") " pod="metallb-system/speaker-6qhvs" Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.746633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"c7913341f95288552b7c4bdaccdb0424ca808ccdd7d13aa51e071466915142cf"} Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.746672 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hdk5z" 
event={"ID":"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a","Type":"ContainerStarted","Data":"99ff07e422a9793c20773c6395b8d64f39a7b1f54ebb92b8801f5ba7b8baa461"} Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.746687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hdk5z" event={"ID":"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a","Type":"ContainerStarted","Data":"af5d55e7892f50c31d017429968268d98a318aca2f6b99bb02f716a7ba681247"} Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.746701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hdk5z" event={"ID":"b20ebc9c-3733-4824-bd5a-6b1e6dc1265a","Type":"ContainerStarted","Data":"884fef09eb35c6b4992f8bf83adb7d5affac0c94f18ece9011e2dce8032b4ea9"} Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.746750 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.749864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" event={"ID":"30d31ea4-3f62-4a7d-9c0c-162d87bab38a","Type":"ContainerStarted","Data":"5ddf457cb1a7989a3f8100d8c387e48df99265220ae074db7fa671287c8345d2"} Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.790630 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-hdk5z" podStartSLOduration=1.790609303 podStartE2EDuration="1.790609303s" podCreationTimestamp="2026-03-14 09:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:13:45.78721738 +0000 UTC m=+1010.775457755" watchObservedRunningTime="2026-03-14 09:13:45.790609303 +0000 UTC m=+1010.778849688" Mar 14 09:13:45 crc kubenswrapper[4687]: I0314 09:13:45.860857 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6qhvs" Mar 14 09:13:46 crc kubenswrapper[4687]: I0314 09:13:46.759185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6qhvs" event={"ID":"b69e2289-9be3-45bd-bcea-89dddbc5e1c2","Type":"ContainerStarted","Data":"f67b9d861e50eae3199432a34cd00a93c0b7987af64407a4aef91b89ae120cad"} Mar 14 09:13:46 crc kubenswrapper[4687]: I0314 09:13:46.759532 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6qhvs" event={"ID":"b69e2289-9be3-45bd-bcea-89dddbc5e1c2","Type":"ContainerStarted","Data":"b0291abb98f2f2349904a080a1920a11a36957d8dd6dbfc7db9edd723cb908a2"} Mar 14 09:13:46 crc kubenswrapper[4687]: I0314 09:13:46.759547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6qhvs" event={"ID":"b69e2289-9be3-45bd-bcea-89dddbc5e1c2","Type":"ContainerStarted","Data":"958227b08312ffef7e1d7801cca69f20d1ea0dead4252e087b5a4a66a28b4a95"} Mar 14 09:13:46 crc kubenswrapper[4687]: I0314 09:13:46.759721 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6qhvs" Mar 14 09:13:46 crc kubenswrapper[4687]: I0314 09:13:46.804644 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6qhvs" podStartSLOduration=2.804621085 podStartE2EDuration="2.804621085s" podCreationTimestamp="2026-03-14 09:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:13:46.797623171 +0000 UTC m=+1011.785863566" watchObservedRunningTime="2026-03-14 09:13:46.804621085 +0000 UTC m=+1011.792861480" Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.876590 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.878560 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.885410 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.992292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.992601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:50 crc kubenswrapper[4687]: I0314 09:13:50.992696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498mt\" (UniqueName: \"kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.094198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.094251 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-498mt\" (UniqueName: \"kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.094293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.095027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.095084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.112922 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498mt\" (UniqueName: \"kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt\") pod \"certified-operators-gw4k6\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:51 crc kubenswrapper[4687]: I0314 09:13:51.201489 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:13:53 crc kubenswrapper[4687]: I0314 09:13:53.234278 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.111718 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.112069 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.807045 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" event={"ID":"30d31ea4-3f62-4a7d-9c0c-162d87bab38a","Type":"ContainerStarted","Data":"ef3a4712e333a23be888a0b0b73e4cc1eff2bf249d1c523fb7ec86f9cb8fa262"} Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.807482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.809735 4687 generic.go:334] "Generic (PLEG): container finished" podID="90888982-f2a2-46f1-a099-05070e93b427" containerID="60281f76cd87774c8aa68883538ae536877a9888afe77a564fe65fbd66480f79" exitCode=0 Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.809816 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" 
event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerDied","Data":"60281f76cd87774c8aa68883538ae536877a9888afe77a564fe65fbd66480f79"} Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.811599 4687 generic.go:334] "Generic (PLEG): container finished" podID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerID="2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a" exitCode=0 Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.811660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerDied","Data":"2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a"} Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.811696 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerStarted","Data":"1e43b1e9e75d6361d74f53b088eb3e0942aa95910544d789d784592830c956d5"} Mar 14 09:13:54 crc kubenswrapper[4687]: I0314 09:13:54.826739 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" podStartSLOduration=2.795648398 podStartE2EDuration="11.826718193s" podCreationTimestamp="2026-03-14 09:13:43 +0000 UTC" firstStartedPulling="2026-03-14 09:13:45.344175313 +0000 UTC m=+1010.332415688" lastFinishedPulling="2026-03-14 09:13:54.375245088 +0000 UTC m=+1019.363485483" observedRunningTime="2026-03-14 09:13:54.825613296 +0000 UTC m=+1019.813853681" watchObservedRunningTime="2026-03-14 09:13:54.826718193 +0000 UTC m=+1019.814958568" Mar 14 09:13:55 crc kubenswrapper[4687]: I0314 09:13:55.819388 4687 generic.go:334] "Generic (PLEG): container finished" podID="90888982-f2a2-46f1-a099-05070e93b427" containerID="2d233cb65fe29f23951af92244f8e401a93d9191b8199d243cf71f95a08f9e73" exitCode=0 Mar 14 09:13:55 crc kubenswrapper[4687]: I0314 
09:13:55.819751 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerDied","Data":"2d233cb65fe29f23951af92244f8e401a93d9191b8199d243cf71f95a08f9e73"} Mar 14 09:13:55 crc kubenswrapper[4687]: I0314 09:13:55.824883 4687 generic.go:334] "Generic (PLEG): container finished" podID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerID="6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057" exitCode=0 Mar 14 09:13:55 crc kubenswrapper[4687]: I0314 09:13:55.824994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerDied","Data":"6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057"} Mar 14 09:13:56 crc kubenswrapper[4687]: I0314 09:13:56.832237 4687 generic.go:334] "Generic (PLEG): container finished" podID="90888982-f2a2-46f1-a099-05070e93b427" containerID="4353b9e82cfc0770648380a573df7c2ca6d65aca056593a9a2d371d567b50138" exitCode=0 Mar 14 09:13:56 crc kubenswrapper[4687]: I0314 09:13:56.832368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerDied","Data":"4353b9e82cfc0770648380a573df7c2ca6d65aca056593a9a2d371d567b50138"} Mar 14 09:13:56 crc kubenswrapper[4687]: I0314 09:13:56.835353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerStarted","Data":"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893"} Mar 14 09:13:56 crc kubenswrapper[4687]: I0314 09:13:56.876166 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gw4k6" podStartSLOduration=5.43940649 podStartE2EDuration="6.876146565s" 
podCreationTimestamp="2026-03-14 09:13:50 +0000 UTC" firstStartedPulling="2026-03-14 09:13:54.812789739 +0000 UTC m=+1019.801030154" lastFinishedPulling="2026-03-14 09:13:56.249529844 +0000 UTC m=+1021.237770229" observedRunningTime="2026-03-14 09:13:56.873022618 +0000 UTC m=+1021.861263003" watchObservedRunningTime="2026-03-14 09:13:56.876146565 +0000 UTC m=+1021.864386940" Mar 14 09:13:57 crc kubenswrapper[4687]: I0314 09:13:57.871241 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"71132a85963d44c3fcef884a1c1eb1156878be850265785d19c732da427b3b7f"} Mar 14 09:13:57 crc kubenswrapper[4687]: I0314 09:13:57.871564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"489833ae587a30d68205a7929d7b2f11c14d11cc2c61b2eb415f51d498af6207"} Mar 14 09:13:57 crc kubenswrapper[4687]: I0314 09:13:57.871576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"7ae196e132bb69d14528f5e5e461a1341cb9bc89ecd87c7986710ab13757dd6e"} Mar 14 09:13:57 crc kubenswrapper[4687]: I0314 09:13:57.871584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"c4f44d768fe619bf0212b687271288a61775520bb35b7c13fc37d0ca82607e98"} Mar 14 09:13:57 crc kubenswrapper[4687]: I0314 09:13:57.871592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"6dd7af866b605622884f84125f6bcaa7a5b0bf9954298d3a1526798ff86cc721"} Mar 14 09:13:58 crc kubenswrapper[4687]: I0314 09:13:58.878974 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-pg6ns" event={"ID":"90888982-f2a2-46f1-a099-05070e93b427","Type":"ContainerStarted","Data":"fc54d43105cbd308fc968bfc9055622f92e8cfd5f5b829cf26047792fa1182a3"} Mar 14 09:13:58 crc kubenswrapper[4687]: I0314 09:13:58.879256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:58 crc kubenswrapper[4687]: I0314 09:13:58.902251 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pg6ns" podStartSLOduration=6.680233558 podStartE2EDuration="15.902232699s" podCreationTimestamp="2026-03-14 09:13:43 +0000 UTC" firstStartedPulling="2026-03-14 09:13:45.131787845 +0000 UTC m=+1010.120028220" lastFinishedPulling="2026-03-14 09:13:54.353786966 +0000 UTC m=+1019.342027361" observedRunningTime="2026-03-14 09:13:58.901261765 +0000 UTC m=+1023.889502140" watchObservedRunningTime="2026-03-14 09:13:58.902232699 +0000 UTC m=+1023.890473074" Mar 14 09:13:59 crc kubenswrapper[4687]: I0314 09:13:59.869855 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:13:59 crc kubenswrapper[4687]: I0314 09:13:59.904903 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.125454 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557994-dnlms"] Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.126442 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.128993 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.129598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.129800 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.140525 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-dnlms"] Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.210836 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98rw\" (UniqueName: \"kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw\") pod \"auto-csr-approver-29557994-dnlms\" (UID: \"c48f6277-8485-4374-858f-43ddb712771a\") " pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.312754 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r98rw\" (UniqueName: \"kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw\") pod \"auto-csr-approver-29557994-dnlms\" (UID: \"c48f6277-8485-4374-858f-43ddb712771a\") " pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.334667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98rw\" (UniqueName: \"kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw\") pod \"auto-csr-approver-29557994-dnlms\" (UID: \"c48f6277-8485-4374-858f-43ddb712771a\") " 
pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.442917 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.689050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-dnlms"] Mar 14 09:14:00 crc kubenswrapper[4687]: I0314 09:14:00.891503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-dnlms" event={"ID":"c48f6277-8485-4374-858f-43ddb712771a","Type":"ContainerStarted","Data":"7944e5d2dd862b1e7ca882836670431fd84828664d13c4e0c80000137b2b83f1"} Mar 14 09:14:01 crc kubenswrapper[4687]: I0314 09:14:01.202635 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:01 crc kubenswrapper[4687]: I0314 09:14:01.203069 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:01 crc kubenswrapper[4687]: I0314 09:14:01.251432 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:01 crc kubenswrapper[4687]: I0314 09:14:01.951685 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:02 crc kubenswrapper[4687]: I0314 09:14:02.007055 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:14:02 crc kubenswrapper[4687]: I0314 09:14:02.906907 4687 generic.go:334] "Generic (PLEG): container finished" podID="c48f6277-8485-4374-858f-43ddb712771a" containerID="46bd9be0aa50b97cf9cdc8260301084d866ddce692d5fdcc6a89b1cc7904998c" exitCode=0 Mar 14 09:14:02 crc kubenswrapper[4687]: I0314 
09:14:02.906970 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-dnlms" event={"ID":"c48f6277-8485-4374-858f-43ddb712771a","Type":"ContainerDied","Data":"46bd9be0aa50b97cf9cdc8260301084d866ddce692d5fdcc6a89b1cc7904998c"} Mar 14 09:14:03 crc kubenswrapper[4687]: I0314 09:14:03.915510 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gw4k6" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="registry-server" containerID="cri-o://88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893" gracePeriod=2 Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.314435 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.371374 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.375165 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r98rw\" (UniqueName: \"kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw\") pod \"c48f6277-8485-4374-858f-43ddb712771a\" (UID: \"c48f6277-8485-4374-858f-43ddb712771a\") " Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.384656 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw" (OuterVolumeSpecName: "kube-api-access-r98rw") pod "c48f6277-8485-4374-858f-43ddb712771a" (UID: "c48f6277-8485-4374-858f-43ddb712771a"). InnerVolumeSpecName "kube-api-access-r98rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.401921 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-hdk5z" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.476108 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content\") pod \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.476252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities\") pod \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.476420 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498mt\" (UniqueName: \"kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt\") pod \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\" (UID: \"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e\") " Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.477382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities" (OuterVolumeSpecName: "utilities") pod "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" (UID: "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.477761 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r98rw\" (UniqueName: \"kubernetes.io/projected/c48f6277-8485-4374-858f-43ddb712771a-kube-api-access-r98rw\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.477791 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.480342 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt" (OuterVolumeSpecName: "kube-api-access-498mt") pod "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" (UID: "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e"). InnerVolumeSpecName "kube-api-access-498mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.528628 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" (UID: "40e5c1b7-e663-4e49-9dcc-ce0f80d9956e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.578791 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498mt\" (UniqueName: \"kubernetes.io/projected/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-kube-api-access-498mt\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.578826 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.896170 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-dw5bx" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.924043 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-dnlms" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.924092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-dnlms" event={"ID":"c48f6277-8485-4374-858f-43ddb712771a","Type":"ContainerDied","Data":"7944e5d2dd862b1e7ca882836670431fd84828664d13c4e0c80000137b2b83f1"} Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.924133 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7944e5d2dd862b1e7ca882836670431fd84828664d13c4e0c80000137b2b83f1" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.926911 4687 generic.go:334] "Generic (PLEG): container finished" podID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerID="88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893" exitCode=0 Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.926940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" 
event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerDied","Data":"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893"} Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.926958 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw4k6" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.926978 4687 scope.go:117] "RemoveContainer" containerID="88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893" Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.926964 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw4k6" event={"ID":"40e5c1b7-e663-4e49-9dcc-ce0f80d9956e","Type":"ContainerDied","Data":"1e43b1e9e75d6361d74f53b088eb3e0942aa95910544d789d784592830c956d5"} Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.977169 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:14:04 crc kubenswrapper[4687]: I0314 09:14:04.982114 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gw4k6"] Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.367975 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-hzkcr"] Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.372939 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-hzkcr"] Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.527814 4687 scope.go:117] "RemoveContainer" containerID="6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.543015 4687 scope.go:117] "RemoveContainer" containerID="2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.568107 4687 scope.go:117] "RemoveContainer" 
containerID="88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893" Mar 14 09:14:05 crc kubenswrapper[4687]: E0314 09:14:05.568590 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893\": container with ID starting with 88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893 not found: ID does not exist" containerID="88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.568627 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893"} err="failed to get container status \"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893\": rpc error: code = NotFound desc = could not find container \"88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893\": container with ID starting with 88d99c3a58378112d3ba54ecdbaf771b19c0e4832be0d65864cf81161a9e1893 not found: ID does not exist" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.568654 4687 scope.go:117] "RemoveContainer" containerID="6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057" Mar 14 09:14:05 crc kubenswrapper[4687]: E0314 09:14:05.569109 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057\": container with ID starting with 6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057 not found: ID does not exist" containerID="6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.569138 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057"} err="failed to get container status \"6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057\": rpc error: code = NotFound desc = could not find container \"6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057\": container with ID starting with 6e6be055939c41cf78a84b37d22985a52635ce0591cc582706254aea52a1c057 not found: ID does not exist" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.569156 4687 scope.go:117] "RemoveContainer" containerID="2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a" Mar 14 09:14:05 crc kubenswrapper[4687]: E0314 09:14:05.569646 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a\": container with ID starting with 2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a not found: ID does not exist" containerID="2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.569677 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a"} err="failed to get container status \"2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a\": rpc error: code = NotFound desc = could not find container \"2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a\": container with ID starting with 2437183982ae2968ffb4e1b043df71cba0648a88b5a256302d37f5492f3d605a not found: ID does not exist" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.744793 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" path="/var/lib/kubelet/pods/40e5c1b7-e663-4e49-9dcc-ce0f80d9956e/volumes" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 
09:14:05.745377 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452b1e5d-996c-485f-adb0-06fd7f1d38a4" path="/var/lib/kubelet/pods/452b1e5d-996c-485f-adb0-06fd7f1d38a4/volumes" Mar 14 09:14:05 crc kubenswrapper[4687]: I0314 09:14:05.864566 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6qhvs" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635069 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:08 crc kubenswrapper[4687]: E0314 09:14:08.635567 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="extract-content" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635579 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="extract-content" Mar 14 09:14:08 crc kubenswrapper[4687]: E0314 09:14:08.635596 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="extract-utilities" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635602 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="extract-utilities" Mar 14 09:14:08 crc kubenswrapper[4687]: E0314 09:14:08.635612 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="registry-server" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635618 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="registry-server" Mar 14 09:14:08 crc kubenswrapper[4687]: E0314 09:14:08.635633 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48f6277-8485-4374-858f-43ddb712771a" containerName="oc" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635639 
4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48f6277-8485-4374-858f-43ddb712771a" containerName="oc" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635741 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48f6277-8485-4374-858f-43ddb712771a" containerName="oc" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.635756 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e5c1b7-e663-4e49-9dcc-ce0f80d9956e" containerName="registry-server" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.636200 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.639235 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.640041 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.642739 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fzk8d" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.659643 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.729812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrk8\" (UniqueName: \"kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8\") pod \"openstack-operator-index-q7rpv\" (UID: \"009ba9b6-d74c-4efc-a3d1-7d91db91796f\") " pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.830704 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vgrk8\" (UniqueName: \"kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8\") pod \"openstack-operator-index-q7rpv\" (UID: \"009ba9b6-d74c-4efc-a3d1-7d91db91796f\") " pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.848535 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgrk8\" (UniqueName: \"kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8\") pod \"openstack-operator-index-q7rpv\" (UID: \"009ba9b6-d74c-4efc-a3d1-7d91db91796f\") " pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:08 crc kubenswrapper[4687]: I0314 09:14:08.955315 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:09 crc kubenswrapper[4687]: I0314 09:14:09.224447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:09 crc kubenswrapper[4687]: W0314 09:14:09.238272 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009ba9b6_d74c_4efc_a3d1_7d91db91796f.slice/crio-9e4aba17dbba415627f546ca3f20a7ed670e29e599fd4c6bc791da8be67fad22 WatchSource:0}: Error finding container 9e4aba17dbba415627f546ca3f20a7ed670e29e599fd4c6bc791da8be67fad22: Status 404 returned error can't find the container with id 9e4aba17dbba415627f546ca3f20a7ed670e29e599fd4c6bc791da8be67fad22 Mar 14 09:14:09 crc kubenswrapper[4687]: I0314 09:14:09.979660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7rpv" event={"ID":"009ba9b6-d74c-4efc-a3d1-7d91db91796f","Type":"ContainerStarted","Data":"9e4aba17dbba415627f546ca3f20a7ed670e29e599fd4c6bc791da8be67fad22"} Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 
09:14:12.007056 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.615865 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-762kl"] Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.616983 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.630348 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-762kl"] Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.702739 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbgk\" (UniqueName: \"kubernetes.io/projected/11a5a905-c530-43e0-87db-4437b61ed3da-kube-api-access-nmbgk\") pod \"openstack-operator-index-762kl\" (UID: \"11a5a905-c530-43e0-87db-4437b61ed3da\") " pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.804100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbgk\" (UniqueName: \"kubernetes.io/projected/11a5a905-c530-43e0-87db-4437b61ed3da-kube-api-access-nmbgk\") pod \"openstack-operator-index-762kl\" (UID: \"11a5a905-c530-43e0-87db-4437b61ed3da\") " pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.830716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbgk\" (UniqueName: \"kubernetes.io/projected/11a5a905-c530-43e0-87db-4437b61ed3da-kube-api-access-nmbgk\") pod \"openstack-operator-index-762kl\" (UID: \"11a5a905-c530-43e0-87db-4437b61ed3da\") " pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.933441 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.999649 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7rpv" event={"ID":"009ba9b6-d74c-4efc-a3d1-7d91db91796f","Type":"ContainerStarted","Data":"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324"} Mar 14 09:14:12 crc kubenswrapper[4687]: I0314 09:14:12.999769 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-q7rpv" podUID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" containerName="registry-server" containerID="cri-o://13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324" gracePeriod=2 Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.019863 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q7rpv" podStartSLOduration=1.909293278 podStartE2EDuration="5.019816406s" podCreationTimestamp="2026-03-14 09:14:08 +0000 UTC" firstStartedPulling="2026-03-14 09:14:09.240325268 +0000 UTC m=+1034.228565643" lastFinishedPulling="2026-03-14 09:14:12.350848366 +0000 UTC m=+1037.339088771" observedRunningTime="2026-03-14 09:14:13.018489843 +0000 UTC m=+1038.006730218" watchObservedRunningTime="2026-03-14 09:14:13.019816406 +0000 UTC m=+1038.008056821" Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.202080 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-762kl"] Mar 14 09:14:13 crc kubenswrapper[4687]: W0314 09:14:13.207783 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11a5a905_c530_43e0_87db_4437b61ed3da.slice/crio-7357d92a6973001423b4f9a8b70f0f0897055b62064bc081f8e1ef6a6f7a8e39 WatchSource:0}: Error finding container 
7357d92a6973001423b4f9a8b70f0f0897055b62064bc081f8e1ef6a6f7a8e39: Status 404 returned error can't find the container with id 7357d92a6973001423b4f9a8b70f0f0897055b62064bc081f8e1ef6a6f7a8e39 Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.363579 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.412086 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgrk8\" (UniqueName: \"kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8\") pod \"009ba9b6-d74c-4efc-a3d1-7d91db91796f\" (UID: \"009ba9b6-d74c-4efc-a3d1-7d91db91796f\") " Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.422129 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8" (OuterVolumeSpecName: "kube-api-access-vgrk8") pod "009ba9b6-d74c-4efc-a3d1-7d91db91796f" (UID: "009ba9b6-d74c-4efc-a3d1-7d91db91796f"). InnerVolumeSpecName "kube-api-access-vgrk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:13 crc kubenswrapper[4687]: I0314 09:14:13.513774 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgrk8\" (UniqueName: \"kubernetes.io/projected/009ba9b6-d74c-4efc-a3d1-7d91db91796f-kube-api-access-vgrk8\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.009032 4687 generic.go:334] "Generic (PLEG): container finished" podID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" containerID="13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324" exitCode=0 Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.009081 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7rpv" event={"ID":"009ba9b6-d74c-4efc-a3d1-7d91db91796f","Type":"ContainerDied","Data":"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324"} Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.009138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7rpv" event={"ID":"009ba9b6-d74c-4efc-a3d1-7d91db91796f","Type":"ContainerDied","Data":"9e4aba17dbba415627f546ca3f20a7ed670e29e599fd4c6bc791da8be67fad22"} Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.009160 4687 scope.go:117] "RemoveContainer" containerID="13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.009711 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q7rpv" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.011013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-762kl" event={"ID":"11a5a905-c530-43e0-87db-4437b61ed3da","Type":"ContainerStarted","Data":"2702a85820a7d3a07702948f831e9895f3d18709d20c5881cb3c29a8019e4cfb"} Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.011037 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-762kl" event={"ID":"11a5a905-c530-43e0-87db-4437b61ed3da","Type":"ContainerStarted","Data":"7357d92a6973001423b4f9a8b70f0f0897055b62064bc081f8e1ef6a6f7a8e39"} Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.029560 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-762kl" podStartSLOduration=1.9699301839999999 podStartE2EDuration="2.02954072s" podCreationTimestamp="2026-03-14 09:14:12 +0000 UTC" firstStartedPulling="2026-03-14 09:14:13.218646257 +0000 UTC m=+1038.206886642" lastFinishedPulling="2026-03-14 09:14:13.278256803 +0000 UTC m=+1038.266497178" observedRunningTime="2026-03-14 09:14:14.025707205 +0000 UTC m=+1039.013947580" watchObservedRunningTime="2026-03-14 09:14:14.02954072 +0000 UTC m=+1039.017781095" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.041526 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.044066 4687 scope.go:117] "RemoveContainer" containerID="13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324" Mar 14 09:14:14 crc kubenswrapper[4687]: E0314 09:14:14.044468 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324\": 
container with ID starting with 13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324 not found: ID does not exist" containerID="13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.044502 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324"} err="failed to get container status \"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324\": rpc error: code = NotFound desc = could not find container \"13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324\": container with ID starting with 13def4645468c70538007d136e0676f7487d252b05d34f3421ee0ed590fbe324 not found: ID does not exist" Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.045873 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-q7rpv"] Mar 14 09:14:14 crc kubenswrapper[4687]: I0314 09:14:14.872040 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pg6ns" Mar 14 09:14:15 crc kubenswrapper[4687]: I0314 09:14:15.754672 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" path="/var/lib/kubelet/pods/009ba9b6-d74c-4efc-a3d1-7d91db91796f/volumes" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.222383 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:16 crc kubenswrapper[4687]: E0314 09:14:16.222841 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" containerName="registry-server" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.222853 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" containerName="registry-server" Mar 14 09:14:16 crc 
kubenswrapper[4687]: I0314 09:14:16.222952 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="009ba9b6-d74c-4efc-a3d1-7d91db91796f" containerName="registry-server" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.223746 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.240913 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.384815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.384861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.384890 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlws9\" (UniqueName: \"kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.487765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.487828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.487864 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlws9\" (UniqueName: \"kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.488771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.488949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.516413 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlws9\" (UniqueName: 
\"kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9\") pod \"community-operators-nmvgv\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:16 crc kubenswrapper[4687]: I0314 09:14:16.587208 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:17 crc kubenswrapper[4687]: I0314 09:14:17.067389 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:17 crc kubenswrapper[4687]: W0314 09:14:17.069992 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870d5b00_0ce2_4967_a49d_5589d090f51f.slice/crio-66de188c4ba7cb159b78eb744f5b31eb25e782e5a0ae165327744e6dd8d214aa WatchSource:0}: Error finding container 66de188c4ba7cb159b78eb744f5b31eb25e782e5a0ae165327744e6dd8d214aa: Status 404 returned error can't find the container with id 66de188c4ba7cb159b78eb744f5b31eb25e782e5a0ae165327744e6dd8d214aa Mar 14 09:14:17 crc kubenswrapper[4687]: E0314 09:14:17.375251 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870d5b00_0ce2_4967_a49d_5589d090f51f.slice/crio-conmon-13ce502332607a1db8ba9f8a78423f9aeb0429b19e138e128fdc888b031e05b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870d5b00_0ce2_4967_a49d_5589d090f51f.slice/crio-13ce502332607a1db8ba9f8a78423f9aeb0429b19e138e128fdc888b031e05b0.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:14:18 crc kubenswrapper[4687]: I0314 09:14:18.038944 4687 generic.go:334] "Generic (PLEG): container finished" podID="870d5b00-0ce2-4967-a49d-5589d090f51f" 
containerID="13ce502332607a1db8ba9f8a78423f9aeb0429b19e138e128fdc888b031e05b0" exitCode=0 Mar 14 09:14:18 crc kubenswrapper[4687]: I0314 09:14:18.039001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerDied","Data":"13ce502332607a1db8ba9f8a78423f9aeb0429b19e138e128fdc888b031e05b0"} Mar 14 09:14:18 crc kubenswrapper[4687]: I0314 09:14:18.039036 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerStarted","Data":"66de188c4ba7cb159b78eb744f5b31eb25e782e5a0ae165327744e6dd8d214aa"} Mar 14 09:14:20 crc kubenswrapper[4687]: I0314 09:14:20.053782 4687 generic.go:334] "Generic (PLEG): container finished" podID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerID="ea094767f11b0fcec9d319fdaee6524ad2a8c5a03bb21e007bcfb8a6f5e18861" exitCode=0 Mar 14 09:14:20 crc kubenswrapper[4687]: I0314 09:14:20.053879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerDied","Data":"ea094767f11b0fcec9d319fdaee6524ad2a8c5a03bb21e007bcfb8a6f5e18861"} Mar 14 09:14:21 crc kubenswrapper[4687]: I0314 09:14:21.062588 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerStarted","Data":"23d408a765590b3e9829e230b50952233c131c2ade630bfb3a680fe303f207a9"} Mar 14 09:14:21 crc kubenswrapper[4687]: I0314 09:14:21.081563 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmvgv" podStartSLOduration=2.695161984 podStartE2EDuration="5.081541036s" podCreationTimestamp="2026-03-14 09:14:16 +0000 UTC" firstStartedPulling="2026-03-14 09:14:18.040691402 
+0000 UTC m=+1043.028931787" lastFinishedPulling="2026-03-14 09:14:20.427070454 +0000 UTC m=+1045.415310839" observedRunningTime="2026-03-14 09:14:21.079507325 +0000 UTC m=+1046.067747710" watchObservedRunningTime="2026-03-14 09:14:21.081541036 +0000 UTC m=+1046.069781421" Mar 14 09:14:22 crc kubenswrapper[4687]: I0314 09:14:22.933957 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:22 crc kubenswrapper[4687]: I0314 09:14:22.934660 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:22 crc kubenswrapper[4687]: I0314 09:14:22.982345 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.023046 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.024266 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.039134 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.129638 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-762kl" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.175036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhkq\" (UniqueName: \"kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.175083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.175276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.276681 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnhkq\" (UniqueName: \"kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq\") pod \"redhat-marketplace-8kmb9\" (UID: 
\"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.276744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.276785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.277360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.277463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.302264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnhkq\" (UniqueName: \"kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq\") pod \"redhat-marketplace-8kmb9\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " 
pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.339735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:23 crc kubenswrapper[4687]: I0314 09:14:23.802146 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.080172 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerStarted","Data":"a551466ff082176ed526bd253a2bd020f919ab7ceb57d68b5ff4f6a3d567fb20"} Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.111404 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.111461 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.111506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.112090 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b"} 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.112149 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b" gracePeriod=600 Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.674599 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552"] Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.676145 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.678451 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mvtsl" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.693521 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552"] Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.796032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.796410 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.796719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxz7\" (UniqueName: \"kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.898306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpxz7\" (UniqueName: \"kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.898425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.898479 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle\") pod 
\"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.899017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.899160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.915551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpxz7\" (UniqueName: \"kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7\") pod \"b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:24 crc kubenswrapper[4687]: I0314 09:14:24.996835 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.107734 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b" exitCode=0 Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.107978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b"} Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.108004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7"} Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.108071 4687 scope.go:117] "RemoveContainer" containerID="c6e6ab56e9f300f6c0a097e2aeafd8b20c69f2074bcf3e3c8d95b1965702e749" Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.113813 4687 generic.go:334] "Generic (PLEG): container finished" podID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerID="a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5" exitCode=0 Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.113860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerDied","Data":"a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5"} Mar 14 09:14:25 crc kubenswrapper[4687]: I0314 09:14:25.426110 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552"] Mar 14 09:14:25 crc kubenswrapper[4687]: W0314 09:14:25.434757 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083ec994_abcc_49d0_a79d_5b2a54a1ab00.slice/crio-8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b WatchSource:0}: Error finding container 8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b: Status 404 returned error can't find the container with id 8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.121631 4687 generic.go:334] "Generic (PLEG): container finished" podID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerID="c3d28499c4c634390da2343d4c876af14577ace60060aaedf8cd1e9376422007" exitCode=0 Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.121681 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" event={"ID":"083ec994-abcc-49d0-a79d-5b2a54a1ab00","Type":"ContainerDied","Data":"c3d28499c4c634390da2343d4c876af14577ace60060aaedf8cd1e9376422007"} Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.121976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" event={"ID":"083ec994-abcc-49d0-a79d-5b2a54a1ab00","Type":"ContainerStarted","Data":"8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b"} Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.127418 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerStarted","Data":"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510"} Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.587852 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.588158 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:26 crc kubenswrapper[4687]: I0314 09:14:26.663634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:27 crc kubenswrapper[4687]: I0314 09:14:27.136314 4687 generic.go:334] "Generic (PLEG): container finished" podID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerID="d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510" exitCode=0 Mar 14 09:14:27 crc kubenswrapper[4687]: I0314 09:14:27.136444 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerDied","Data":"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510"} Mar 14 09:14:27 crc kubenswrapper[4687]: I0314 09:14:27.153746 4687 generic.go:334] "Generic (PLEG): container finished" podID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerID="bb13b6e95148c6a5432bfaacbd97afd81e45f03a99f15e0a205b07a236f2a34d" exitCode=0 Mar 14 09:14:27 crc kubenswrapper[4687]: I0314 09:14:27.154077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" event={"ID":"083ec994-abcc-49d0-a79d-5b2a54a1ab00","Type":"ContainerDied","Data":"bb13b6e95148c6a5432bfaacbd97afd81e45f03a99f15e0a205b07a236f2a34d"} Mar 14 09:14:27 crc kubenswrapper[4687]: I0314 09:14:27.212694 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:27 crc kubenswrapper[4687]: E0314 09:14:27.525009 4687 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083ec994_abcc_49d0_a79d_5b2a54a1ab00.slice/crio-conmon-bf421fd2d08cdf85f22db64880ef6b7d7ed638dee041670ed460a5971fbc2340.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083ec994_abcc_49d0_a79d_5b2a54a1ab00.slice/crio-bf421fd2d08cdf85f22db64880ef6b7d7ed638dee041670ed460a5971fbc2340.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:14:28 crc kubenswrapper[4687]: I0314 09:14:28.163391 4687 generic.go:334] "Generic (PLEG): container finished" podID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerID="bf421fd2d08cdf85f22db64880ef6b7d7ed638dee041670ed460a5971fbc2340" exitCode=0 Mar 14 09:14:28 crc kubenswrapper[4687]: I0314 09:14:28.163483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" event={"ID":"083ec994-abcc-49d0-a79d-5b2a54a1ab00","Type":"ContainerDied","Data":"bf421fd2d08cdf85f22db64880ef6b7d7ed638dee041670ed460a5971fbc2340"} Mar 14 09:14:28 crc kubenswrapper[4687]: I0314 09:14:28.166594 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerStarted","Data":"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29"} Mar 14 09:14:28 crc kubenswrapper[4687]: I0314 09:14:28.207044 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8kmb9" podStartSLOduration=2.807643344 podStartE2EDuration="5.207026537s" podCreationTimestamp="2026-03-14 09:14:23 +0000 UTC" firstStartedPulling="2026-03-14 09:14:25.11511403 +0000 UTC m=+1050.103354405" lastFinishedPulling="2026-03-14 09:14:27.514497223 +0000 UTC m=+1052.502737598" observedRunningTime="2026-03-14 09:14:28.20477737 
+0000 UTC m=+1053.193017775" watchObservedRunningTime="2026-03-14 09:14:28.207026537 +0000 UTC m=+1053.195266912" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.504002 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.674783 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle\") pod \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.674839 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util\") pod \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.674872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpxz7\" (UniqueName: \"kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7\") pod \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\" (UID: \"083ec994-abcc-49d0-a79d-5b2a54a1ab00\") " Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.675533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle" (OuterVolumeSpecName: "bundle") pod "083ec994-abcc-49d0-a79d-5b2a54a1ab00" (UID: "083ec994-abcc-49d0-a79d-5b2a54a1ab00"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.680069 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7" (OuterVolumeSpecName: "kube-api-access-lpxz7") pod "083ec994-abcc-49d0-a79d-5b2a54a1ab00" (UID: "083ec994-abcc-49d0-a79d-5b2a54a1ab00"). InnerVolumeSpecName "kube-api-access-lpxz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.690871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util" (OuterVolumeSpecName: "util") pod "083ec994-abcc-49d0-a79d-5b2a54a1ab00" (UID: "083ec994-abcc-49d0-a79d-5b2a54a1ab00"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.776966 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.777001 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/083ec994-abcc-49d0-a79d-5b2a54a1ab00-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:29 crc kubenswrapper[4687]: I0314 09:14:29.777012 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpxz7\" (UniqueName: \"kubernetes.io/projected/083ec994-abcc-49d0-a79d-5b2a54a1ab00-kube-api-access-lpxz7\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:30 crc kubenswrapper[4687]: I0314 09:14:30.181224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" 
event={"ID":"083ec994-abcc-49d0-a79d-5b2a54a1ab00","Type":"ContainerDied","Data":"8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b"} Mar 14 09:14:30 crc kubenswrapper[4687]: I0314 09:14:30.181282 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8352a1ac66ff20abbe8cd383fda19ba1a8e4e54fad6887f318550c2722d5630b" Mar 14 09:14:30 crc kubenswrapper[4687]: I0314 09:14:30.181344 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.572632 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd"] Mar 14 09:14:32 crc kubenswrapper[4687]: E0314 09:14:32.573058 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="pull" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.573069 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="pull" Mar 14 09:14:32 crc kubenswrapper[4687]: E0314 09:14:32.573083 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="util" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.573089 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="util" Mar 14 09:14:32 crc kubenswrapper[4687]: E0314 09:14:32.573104 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="extract" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.573110 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="extract" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.573213 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="083ec994-abcc-49d0-a79d-5b2a54a1ab00" containerName="extract" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.573615 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.575368 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qbwvp" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.601420 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd"] Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.711930 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm62v\" (UniqueName: \"kubernetes.io/projected/ac2cf39c-8fde-45de-858d-c8c9a8a572a1-kube-api-access-nm62v\") pod \"openstack-operator-controller-init-6bdb46c895-chnbd\" (UID: \"ac2cf39c-8fde-45de-858d-c8c9a8a572a1\") " pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.807536 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.807813 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nmvgv" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="registry-server" containerID="cri-o://23d408a765590b3e9829e230b50952233c131c2ade630bfb3a680fe303f207a9" gracePeriod=2 Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.813419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm62v\" (UniqueName: 
\"kubernetes.io/projected/ac2cf39c-8fde-45de-858d-c8c9a8a572a1-kube-api-access-nm62v\") pod \"openstack-operator-controller-init-6bdb46c895-chnbd\" (UID: \"ac2cf39c-8fde-45de-858d-c8c9a8a572a1\") " pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.831312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm62v\" (UniqueName: \"kubernetes.io/projected/ac2cf39c-8fde-45de-858d-c8c9a8a572a1-kube-api-access-nm62v\") pod \"openstack-operator-controller-init-6bdb46c895-chnbd\" (UID: \"ac2cf39c-8fde-45de-858d-c8c9a8a572a1\") " pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:32 crc kubenswrapper[4687]: I0314 09:14:32.890777 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.212172 4687 generic.go:334] "Generic (PLEG): container finished" podID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerID="23d408a765590b3e9829e230b50952233c131c2ade630bfb3a680fe303f207a9" exitCode=0 Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.212257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerDied","Data":"23d408a765590b3e9829e230b50952233c131c2ade630bfb3a680fe303f207a9"} Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.299633 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.340306 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.340378 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.367693 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd"] Mar 14 09:14:33 crc kubenswrapper[4687]: W0314 09:14:33.379202 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac2cf39c_8fde_45de_858d_c8c9a8a572a1.slice/crio-bbdef481589072e2a2361f2f74b764d91a9d7ded356af28a725959e557622381 WatchSource:0}: Error finding container bbdef481589072e2a2361f2f74b764d91a9d7ded356af28a725959e557622381: Status 404 returned error can't find the container with id bbdef481589072e2a2361f2f74b764d91a9d7ded356af28a725959e557622381 Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.386915 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.423886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content\") pod \"870d5b00-0ce2-4967-a49d-5589d090f51f\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.423972 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlws9\" (UniqueName: 
\"kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9\") pod \"870d5b00-0ce2-4967-a49d-5589d090f51f\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.424002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities\") pod \"870d5b00-0ce2-4967-a49d-5589d090f51f\" (UID: \"870d5b00-0ce2-4967-a49d-5589d090f51f\") " Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.425183 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities" (OuterVolumeSpecName: "utilities") pod "870d5b00-0ce2-4967-a49d-5589d090f51f" (UID: "870d5b00-0ce2-4967-a49d-5589d090f51f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.431112 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9" (OuterVolumeSpecName: "kube-api-access-xlws9") pod "870d5b00-0ce2-4967-a49d-5589d090f51f" (UID: "870d5b00-0ce2-4967-a49d-5589d090f51f"). InnerVolumeSpecName "kube-api-access-xlws9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.495145 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "870d5b00-0ce2-4967-a49d-5589d090f51f" (UID: "870d5b00-0ce2-4967-a49d-5589d090f51f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.525639 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.525695 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlws9\" (UniqueName: \"kubernetes.io/projected/870d5b00-0ce2-4967-a49d-5589d090f51f-kube-api-access-xlws9\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:33 crc kubenswrapper[4687]: I0314 09:14:33.525712 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870d5b00-0ce2-4967-a49d-5589d090f51f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.224869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmvgv" event={"ID":"870d5b00-0ce2-4967-a49d-5589d090f51f","Type":"ContainerDied","Data":"66de188c4ba7cb159b78eb744f5b31eb25e782e5a0ae165327744e6dd8d214aa"} Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.224931 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmvgv" Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.225260 4687 scope.go:117] "RemoveContainer" containerID="23d408a765590b3e9829e230b50952233c131c2ade630bfb3a680fe303f207a9" Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.226777 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" event={"ID":"ac2cf39c-8fde-45de-858d-c8c9a8a572a1","Type":"ContainerStarted","Data":"bbdef481589072e2a2361f2f74b764d91a9d7ded356af28a725959e557622381"} Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.250570 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.255725 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmvgv"] Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.262668 4687 scope.go:117] "RemoveContainer" containerID="ea094767f11b0fcec9d319fdaee6524ad2a8c5a03bb21e007bcfb8a6f5e18861" Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.310254 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:34 crc kubenswrapper[4687]: I0314 09:14:34.325351 4687 scope.go:117] "RemoveContainer" containerID="13ce502332607a1db8ba9f8a78423f9aeb0429b19e138e128fdc888b031e05b0" Mar 14 09:14:35 crc kubenswrapper[4687]: I0314 09:14:35.749866 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" path="/var/lib/kubelet/pods/870d5b00-0ce2-4967-a49d-5589d090f51f/volumes" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.409061 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.409787 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8kmb9" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="registry-server" containerID="cri-o://d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29" gracePeriod=2 Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.787564 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.892817 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnhkq\" (UniqueName: \"kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq\") pod \"a0e24b98-50af-455d-a56d-360d2eeb3a43\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.892893 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities\") pod \"a0e24b98-50af-455d-a56d-360d2eeb3a43\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.892978 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content\") pod \"a0e24b98-50af-455d-a56d-360d2eeb3a43\" (UID: \"a0e24b98-50af-455d-a56d-360d2eeb3a43\") " Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.900071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities" (OuterVolumeSpecName: "utilities") pod "a0e24b98-50af-455d-a56d-360d2eeb3a43" (UID: "a0e24b98-50af-455d-a56d-360d2eeb3a43"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.908569 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq" (OuterVolumeSpecName: "kube-api-access-jnhkq") pod "a0e24b98-50af-455d-a56d-360d2eeb3a43" (UID: "a0e24b98-50af-455d-a56d-360d2eeb3a43"). InnerVolumeSpecName "kube-api-access-jnhkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.935816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0e24b98-50af-455d-a56d-360d2eeb3a43" (UID: "a0e24b98-50af-455d-a56d-360d2eeb3a43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.994721 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnhkq\" (UniqueName: \"kubernetes.io/projected/a0e24b98-50af-455d-a56d-360d2eeb3a43-kube-api-access-jnhkq\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.994766 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:37 crc kubenswrapper[4687]: I0314 09:14:37.994778 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e24b98-50af-455d-a56d-360d2eeb3a43-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.259584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" 
event={"ID":"ac2cf39c-8fde-45de-858d-c8c9a8a572a1","Type":"ContainerStarted","Data":"9db8390b35c7da0d5646eb22a1a96969254d1818cdec32ddc417ea72bfdb7000"} Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.259827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.263862 4687 generic.go:334] "Generic (PLEG): container finished" podID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerID="d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29" exitCode=0 Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.263959 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kmb9" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.263944 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerDied","Data":"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29"} Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.264126 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kmb9" event={"ID":"a0e24b98-50af-455d-a56d-360d2eeb3a43","Type":"ContainerDied","Data":"a551466ff082176ed526bd253a2bd020f919ab7ceb57d68b5ff4f6a3d567fb20"} Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.264165 4687 scope.go:117] "RemoveContainer" containerID="d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.287471 4687 scope.go:117] "RemoveContainer" containerID="d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.310066 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" podStartSLOduration=2.50476589 podStartE2EDuration="6.310047937s" podCreationTimestamp="2026-03-14 09:14:32 +0000 UTC" firstStartedPulling="2026-03-14 09:14:33.381232008 +0000 UTC m=+1058.369472383" lastFinishedPulling="2026-03-14 09:14:37.186514055 +0000 UTC m=+1062.174754430" observedRunningTime="2026-03-14 09:14:38.299798483 +0000 UTC m=+1063.288038898" watchObservedRunningTime="2026-03-14 09:14:38.310047937 +0000 UTC m=+1063.298288322" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.311122 4687 scope.go:117] "RemoveContainer" containerID="a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.325036 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.330321 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kmb9"] Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.331748 4687 scope.go:117] "RemoveContainer" containerID="d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29" Mar 14 09:14:38 crc kubenswrapper[4687]: E0314 09:14:38.341842 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29\": container with ID starting with d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29 not found: ID does not exist" containerID="d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.341906 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29"} err="failed to get container status 
\"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29\": rpc error: code = NotFound desc = could not find container \"d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29\": container with ID starting with d88a7d543bb8f6d7ea356a92be2c5ad549e865ff64dde626d8ec62ca8a442f29 not found: ID does not exist" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.341943 4687 scope.go:117] "RemoveContainer" containerID="d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510" Mar 14 09:14:38 crc kubenswrapper[4687]: E0314 09:14:38.342529 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510\": container with ID starting with d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510 not found: ID does not exist" containerID="d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.342651 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510"} err="failed to get container status \"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510\": rpc error: code = NotFound desc = could not find container \"d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510\": container with ID starting with d0a24ca3fdc4acc171ef2272fb8aac5bea664291da6f02e0e4c668f89cc99510 not found: ID does not exist" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.342712 4687 scope.go:117] "RemoveContainer" containerID="a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5" Mar 14 09:14:38 crc kubenswrapper[4687]: E0314 09:14:38.343102 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5\": container with ID starting with a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5 not found: ID does not exist" containerID="a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5" Mar 14 09:14:38 crc kubenswrapper[4687]: I0314 09:14:38.343145 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5"} err="failed to get container status \"a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5\": rpc error: code = NotFound desc = could not find container \"a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5\": container with ID starting with a70303d024b632e9bd966d0f70397950ee5905b7ccd744000df9598370d920f5 not found: ID does not exist" Mar 14 09:14:39 crc kubenswrapper[4687]: I0314 09:14:39.747079 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" path="/var/lib/kubelet/pods/a0e24b98-50af-455d-a56d-360d2eeb3a43/volumes" Mar 14 09:14:42 crc kubenswrapper[4687]: I0314 09:14:42.893210 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bdb46c895-chnbd" Mar 14 09:14:58 crc kubenswrapper[4687]: I0314 09:14:58.083801 4687 scope.go:117] "RemoveContainer" containerID="31fa318c1fe23d4b5173f74856449bdb9d27216fa59dd85b1640dc9fe3ea41d8" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131087 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2"] Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131648 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131663 4687 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131673 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131679 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131687 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131695 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131705 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131711 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131722 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131728 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4687]: E0314 09:15:00.131744 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131749 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131857 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="870d5b00-0ce2-4967-a49d-5589d090f51f" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.131866 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e24b98-50af-455d-a56d-360d2eeb3a43" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.132379 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.134555 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.141835 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2"] Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.142857 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.194252 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.194630 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2m6\" (UniqueName: 
\"kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.194700 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.295490 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.295544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c2m6\" (UniqueName: \"kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.295593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc 
kubenswrapper[4687]: I0314 09:15:00.296507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.303722 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.311558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c2m6\" (UniqueName: \"kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6\") pod \"collect-profiles-29557995-zlrp2\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.460963 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:00 crc kubenswrapper[4687]: I0314 09:15:00.919423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2"] Mar 14 09:15:01 crc kubenswrapper[4687]: I0314 09:15:01.419509 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" containerID="0119f18a331e35bbc996e6ea7538c456f99ed36a37613ee3d8e2dccbbeba6946" exitCode=0 Mar 14 09:15:01 crc kubenswrapper[4687]: I0314 09:15:01.419846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" event={"ID":"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a","Type":"ContainerDied","Data":"0119f18a331e35bbc996e6ea7538c456f99ed36a37613ee3d8e2dccbbeba6946"} Mar 14 09:15:01 crc kubenswrapper[4687]: I0314 09:15:01.420883 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" event={"ID":"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a","Type":"ContainerStarted","Data":"e8ac67ea7376d138bb0219c3db44a7d996888a84d267cf587da94309d6bfe5ac"} Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.763892 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.938794 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c2m6\" (UniqueName: \"kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6\") pod \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.939158 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume\") pod \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.939237 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume\") pod \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\" (UID: \"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a\") " Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.939709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" (UID: "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.943593 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" (UID: "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:15:02 crc kubenswrapper[4687]: I0314 09:15:02.943745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6" (OuterVolumeSpecName: "kube-api-access-5c2m6") pod "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" (UID: "4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a"). InnerVolumeSpecName "kube-api-access-5c2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.039910 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c2m6\" (UniqueName: \"kubernetes.io/projected/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-kube-api-access-5c2m6\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.039938 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.039949 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.432373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" event={"ID":"4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a","Type":"ContainerDied","Data":"e8ac67ea7376d138bb0219c3db44a7d996888a84d267cf587da94309d6bfe5ac"} Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.432409 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ac67ea7376d138bb0219c3db44a7d996888a84d267cf587da94309d6bfe5ac" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.432452 4687 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.689830 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr"] Mar 14 09:15:03 crc kubenswrapper[4687]: E0314 09:15:03.690121 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" containerName="collect-profiles" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.690139 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" containerName="collect-profiles" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.690323 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" containerName="collect-profiles" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.690893 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.695458 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fz24h" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.700161 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.706148 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.706995 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.712179 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.713403 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.722211 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n822l" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.727256 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8qfkg" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.730442 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.751139 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkk5q\" (UniqueName: \"kubernetes.io/projected/e76c5b65-d7d1-4986-90fe-dab7724bc142-kube-api-access-kkk5q\") pod \"cinder-operator-controller-manager-984cd4dcf-wghgn\" (UID: \"e76c5b65-d7d1-4986-90fe-dab7724bc142\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.751221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhjxg\" (UniqueName: \"kubernetes.io/projected/56b6df44-ea65-46b4-93da-67d70a3769b1-kube-api-access-vhjxg\") pod \"designate-operator-controller-manager-66d56f6ff4-bb2q4\" (UID: 
\"56b6df44-ea65-46b4-93da-67d70a3769b1\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.751255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrc9\" (UniqueName: \"kubernetes.io/projected/52b416da-f3c4-43f1-a91a-14dac5c1cf25-kube-api-access-sjrc9\") pod \"barbican-operator-controller-manager-d47688694-v6wjr\" (UID: \"52b416da-f3c4-43f1-a91a-14dac5c1cf25\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.760945 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.761014 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.763256 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.765425 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c99ft" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.782221 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.786823 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-426nr"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.788076 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.789870 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sc2n9" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.822866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-426nr"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.843843 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.847301 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861759 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkk5q\" (UniqueName: \"kubernetes.io/projected/e76c5b65-d7d1-4986-90fe-dab7724bc142-kube-api-access-kkk5q\") pod \"cinder-operator-controller-manager-984cd4dcf-wghgn\" (UID: \"e76c5b65-d7d1-4986-90fe-dab7724bc142\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hjc\" (UniqueName: \"kubernetes.io/projected/6de94ed3-f1ea-4cb3-88a7-c78a0af38830-kube-api-access-h4hjc\") pod \"glance-operator-controller-manager-5964f64c48-rblcv\" (UID: \"6de94ed3-f1ea-4cb3-88a7-c78a0af38830\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861860 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-xrb6r\" (UniqueName: \"kubernetes.io/projected/bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9-kube-api-access-xrb6r\") pod \"heat-operator-controller-manager-77b6666d85-426nr\" (UID: \"bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhjxg\" (UniqueName: \"kubernetes.io/projected/56b6df44-ea65-46b4-93da-67d70a3769b1-kube-api-access-vhjxg\") pod \"designate-operator-controller-manager-66d56f6ff4-bb2q4\" (UID: \"56b6df44-ea65-46b4-93da-67d70a3769b1\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrc9\" (UniqueName: \"kubernetes.io/projected/52b416da-f3c4-43f1-a91a-14dac5c1cf25-kube-api-access-sjrc9\") pod \"barbican-operator-controller-manager-d47688694-v6wjr\" (UID: \"52b416da-f3c4-43f1-a91a-14dac5c1cf25\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.861929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqn4\" (UniqueName: \"kubernetes.io/projected/40a7f8a7-9a5c-4607-a685-747c0fc779b5-kube-api-access-zsqn4\") pod \"horizon-operator-controller-manager-6d9d6b584d-pttvx\" (UID: \"40a7f8a7-9a5c-4607-a685-747c0fc779b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.873713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.873834 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-q9dr9" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.880096 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.881050 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.883535 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.884008 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ccdjn" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.894749 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.896963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrc9\" (UniqueName: \"kubernetes.io/projected/52b416da-f3c4-43f1-a91a-14dac5c1cf25-kube-api-access-sjrc9\") pod \"barbican-operator-controller-manager-d47688694-v6wjr\" (UID: \"52b416da-f3c4-43f1-a91a-14dac5c1cf25\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.902421 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.903450 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.904427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkk5q\" (UniqueName: \"kubernetes.io/projected/e76c5b65-d7d1-4986-90fe-dab7724bc142-kube-api-access-kkk5q\") pod \"cinder-operator-controller-manager-984cd4dcf-wghgn\" (UID: \"e76c5b65-d7d1-4986-90fe-dab7724bc142\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.905144 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qnkvr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.913373 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.915467 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.916347 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.919975 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-c2q8z" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.920300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhjxg\" (UniqueName: \"kubernetes.io/projected/56b6df44-ea65-46b4-93da-67d70a3769b1-kube-api-access-vhjxg\") pod \"designate-operator-controller-manager-66d56f6ff4-bb2q4\" (UID: \"56b6df44-ea65-46b4-93da-67d70a3769b1\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.932089 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.941374 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.942212 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.946817 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kp4p2" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.951235 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.952147 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.956410 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.959786 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cfb2q" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.963228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7hf\" (UniqueName: \"kubernetes.io/projected/18179c6b-b84c-4bd5-b077-fc8c8689e12f-kube-api-access-6w7hf\") pod \"manila-operator-controller-manager-57b484b4df-nbctg\" (UID: \"18179c6b-b84c-4bd5-b077-fc8c8689e12f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.963286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nqzl\" (UniqueName: \"kubernetes.io/projected/59360bb4-cf7e-41a7-b78e-0615b4cd15e4-kube-api-access-6nqzl\") pod \"keystone-operator-controller-manager-684f77d66d-cqznw\" (UID: \"59360bb4-cf7e-41a7-b78e-0615b4cd15e4\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.963317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hjc\" (UniqueName: \"kubernetes.io/projected/6de94ed3-f1ea-4cb3-88a7-c78a0af38830-kube-api-access-h4hjc\") pod \"glance-operator-controller-manager-5964f64c48-rblcv\" (UID: \"6de94ed3-f1ea-4cb3-88a7-c78a0af38830\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.963353 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrb6r\" (UniqueName: \"kubernetes.io/projected/bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9-kube-api-access-xrb6r\") pod \"heat-operator-controller-manager-77b6666d85-426nr\" (UID: \"bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.963380 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4c9f\" (UniqueName: \"kubernetes.io/projected/481cbeed-e6fd-4afe-a6af-043a6a06a521-kube-api-access-k4c9f\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2\" (UID: \"481cbeed-e6fd-4afe-a6af-043a6a06a521\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.964094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqn4\" (UniqueName: \"kubernetes.io/projected/40a7f8a7-9a5c-4607-a685-747c0fc779b5-kube-api-access-zsqn4\") pod \"horizon-operator-controller-manager-6d9d6b584d-pttvx\" (UID: \"40a7f8a7-9a5c-4607-a685-747c0fc779b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.966817 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.982652 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m"] Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.984030 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.985304 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hjc\" (UniqueName: \"kubernetes.io/projected/6de94ed3-f1ea-4cb3-88a7-c78a0af38830-kube-api-access-h4hjc\") pod \"glance-operator-controller-manager-5964f64c48-rblcv\" (UID: \"6de94ed3-f1ea-4cb3-88a7-c78a0af38830\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.987115 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r2x57" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.989346 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqn4\" (UniqueName: \"kubernetes.io/projected/40a7f8a7-9a5c-4607-a685-747c0fc779b5-kube-api-access-zsqn4\") pod \"horizon-operator-controller-manager-6d9d6b584d-pttvx\" (UID: \"40a7f8a7-9a5c-4607-a685-747c0fc779b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:03 crc kubenswrapper[4687]: I0314 09:15:03.999083 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-htnfl"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:03.999912 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.001454 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8fr5r" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.005865 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.006899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrb6r\" (UniqueName: \"kubernetes.io/projected/bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9-kube-api-access-xrb6r\") pod \"heat-operator-controller-manager-77b6666d85-426nr\" (UID: \"bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.011426 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-htnfl"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.011717 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.019806 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.020782 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.023777 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xkgs7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.030489 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.033679 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.044950 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.045750 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.050048 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.050717 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.050960 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.055409 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hn82k" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.055667 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.058781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-flz78" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.059242 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.065468 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtx4n\" (UniqueName: \"kubernetes.io/projected/9a2eadbd-233b-49a0-b869-de204e01663c-kube-api-access-gtx4n\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.065995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7hf\" (UniqueName: \"kubernetes.io/projected/18179c6b-b84c-4bd5-b077-fc8c8689e12f-kube-api-access-6w7hf\") pod \"manila-operator-controller-manager-57b484b4df-nbctg\" (UID: \"18179c6b-b84c-4bd5-b077-fc8c8689e12f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.066025 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6nqzl\" (UniqueName: \"kubernetes.io/projected/59360bb4-cf7e-41a7-b78e-0615b4cd15e4-kube-api-access-6nqzl\") pod \"keystone-operator-controller-manager-684f77d66d-cqznw\" (UID: \"59360bb4-cf7e-41a7-b78e-0615b4cd15e4\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.066058 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4c9f\" (UniqueName: \"kubernetes.io/projected/481cbeed-e6fd-4afe-a6af-043a6a06a521-kube-api-access-k4c9f\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2\" (UID: \"481cbeed-e6fd-4afe-a6af-043a6a06a521\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.066113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fhw\" (UniqueName: \"kubernetes.io/projected/44cc6d95-7c97-4e12-a369-4595f9a540cd-kube-api-access-n8fhw\") pod \"ironic-operator-controller-manager-5bc894d9b-s6578\" (UID: \"44cc6d95-7c97-4e12-a369-4595f9a540cd\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.066138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.086307 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.087131 4687 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.087627 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.088622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nqzl\" (UniqueName: \"kubernetes.io/projected/59360bb4-cf7e-41a7-b78e-0615b4cd15e4-kube-api-access-6nqzl\") pod \"keystone-operator-controller-manager-684f77d66d-cqznw\" (UID: \"59360bb4-cf7e-41a7-b78e-0615b4cd15e4\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.095768 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7hf\" (UniqueName: \"kubernetes.io/projected/18179c6b-b84c-4bd5-b077-fc8c8689e12f-kube-api-access-6w7hf\") pod \"manila-operator-controller-manager-57b484b4df-nbctg\" (UID: \"18179c6b-b84c-4bd5-b077-fc8c8689e12f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.097006 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.102968 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.106063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4c9f\" (UniqueName: \"kubernetes.io/projected/481cbeed-e6fd-4afe-a6af-043a6a06a521-kube-api-access-k4c9f\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2\" (UID: 
\"481cbeed-e6fd-4afe-a6af-043a6a06a521\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.106266 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wrjgj" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.108552 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.113883 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.123866 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.127488 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.128303 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.130087 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jngsj" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.143514 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.189017 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fhw\" (UniqueName: \"kubernetes.io/projected/44cc6d95-7c97-4e12-a369-4595f9a540cd-kube-api-access-n8fhw\") pod \"ironic-operator-controller-manager-5bc894d9b-s6578\" (UID: \"44cc6d95-7c97-4e12-a369-4595f9a540cd\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.189086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.191148 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.194733 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.201304 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-j29dj" Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.209814 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.209894 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert podName:9a2eadbd-233b-49a0-b869-de204e01663c nodeName:}" failed. No retries permitted until 2026-03-14 09:15:04.70987503 +0000 UTC m=+1089.698115405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert") pod "infra-operator-controller-manager-54dc5b8f8d-m8hhf" (UID: "9a2eadbd-233b-49a0-b869-de204e01663c") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210178 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwf4\" (UniqueName: \"kubernetes.io/projected/c898928b-40b3-46d7-87e7-dfd483949ed2-kube-api-access-pmwf4\") pod \"neutron-operator-controller-manager-776c5696bf-nsm5m\" (UID: \"c898928b-40b3-46d7-87e7-dfd483949ed2\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bv98\" (UniqueName: \"kubernetes.io/projected/138d02d6-70f7-4310-b250-2756a22333b5-kube-api-access-5bv98\") pod 
\"octavia-operator-controller-manager-5f4f55cb5c-9cbr4\" (UID: \"138d02d6-70f7-4310-b250-2756a22333b5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjks\" (UniqueName: \"kubernetes.io/projected/faf0ec50-97de-40e8-9e7e-c407f08e2de6-kube-api-access-2tjks\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtx4n\" (UniqueName: \"kubernetes.io/projected/9a2eadbd-233b-49a0-b869-de204e01663c-kube-api-access-gtx4n\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv75s\" (UniqueName: \"kubernetes.io/projected/b32556c3-5b91-4f2d-8f1f-6a1b2eae0367-kube-api-access-wv75s\") pod \"nova-operator-controller-manager-7f84474648-htnfl\" (UID: \"b32556c3-5b91-4f2d-8f1f-6a1b2eae0367\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210754 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.210777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9zc\" (UniqueName: \"kubernetes.io/projected/79c0c137-135c-49a1-bd73-20e6325ca1e6-kube-api-access-wg9zc\") pod \"ovn-operator-controller-manager-bbc5b68f9-lf9b6\" (UID: \"79c0c137-135c-49a1-bd73-20e6325ca1e6\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.232000 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.244201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtx4n\" (UniqueName: \"kubernetes.io/projected/9a2eadbd-233b-49a0-b869-de204e01663c-kube-api-access-gtx4n\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.251934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fhw\" (UniqueName: \"kubernetes.io/projected/44cc6d95-7c97-4e12-a369-4595f9a540cd-kube-api-access-n8fhw\") pod \"ironic-operator-controller-manager-5bc894d9b-s6578\" (UID: \"44cc6d95-7c97-4e12-a369-4595f9a540cd\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.260353 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.303664 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.304527 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.308905 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.323925 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jnc8g" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.325231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qgtf\" (UniqueName: \"kubernetes.io/projected/ace7279e-00c3-42fa-8dfd-ff8f3256c6f0-kube-api-access-5qgtf\") pod \"test-operator-controller-manager-5c5cb9c4d7-bwzgn\" (UID: \"ace7279e-00c3-42fa-8dfd-ff8f3256c6f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.325281 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkx84\" (UniqueName: \"kubernetes.io/projected/67b29df1-eced-4bfd-9c8c-24d56f0f880c-kube-api-access-pkx84\") pod \"swift-operator-controller-manager-7f9cc5dd44-t78v7\" (UID: \"67b29df1-eced-4bfd-9c8c-24d56f0f880c\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.325308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-wg9zc\" (UniqueName: \"kubernetes.io/projected/79c0c137-135c-49a1-bd73-20e6325ca1e6-kube-api-access-wg9zc\") pod \"ovn-operator-controller-manager-bbc5b68f9-lf9b6\" (UID: \"79c0c137-135c-49a1-bd73-20e6325ca1e6\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.325327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.326368 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.326408 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:04.826393904 +0000 UTC m=+1089.814634279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.326694 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2zq\" (UniqueName: \"kubernetes.io/projected/5a1032a1-2d67-404d-8461-c84ab72bd2a3-kube-api-access-wj2zq\") pod \"placement-operator-controller-manager-574d45c66c-gb8wb\" (UID: \"5a1032a1-2d67-404d-8461-c84ab72bd2a3\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.326935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmwf4\" (UniqueName: \"kubernetes.io/projected/c898928b-40b3-46d7-87e7-dfd483949ed2-kube-api-access-pmwf4\") pod \"neutron-operator-controller-manager-776c5696bf-nsm5m\" (UID: \"c898928b-40b3-46d7-87e7-dfd483949ed2\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.326959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bv98\" (UniqueName: \"kubernetes.io/projected/138d02d6-70f7-4310-b250-2756a22333b5-kube-api-access-5bv98\") pod \"octavia-operator-controller-manager-5f4f55cb5c-9cbr4\" (UID: \"138d02d6-70f7-4310-b250-2756a22333b5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.326983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjks\" (UniqueName: \"kubernetes.io/projected/faf0ec50-97de-40e8-9e7e-c407f08e2de6-kube-api-access-2tjks\") pod 
\"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.327022 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv76v\" (UniqueName: \"kubernetes.io/projected/6dea1f44-0092-43b8-9576-b8b64b08d923-kube-api-access-zv76v\") pod \"telemetry-operator-controller-manager-6854b8b9d9-rmdj4\" (UID: \"6dea1f44-0092-43b8-9576-b8b64b08d923\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.327053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv75s\" (UniqueName: \"kubernetes.io/projected/b32556c3-5b91-4f2d-8f1f-6a1b2eae0367-kube-api-access-wv75s\") pod \"nova-operator-controller-manager-7f84474648-htnfl\" (UID: \"b32556c3-5b91-4f2d-8f1f-6a1b2eae0367\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.335666 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.336536 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.341373 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ztvsc" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.341730 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.343385 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.361553 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv75s\" (UniqueName: \"kubernetes.io/projected/b32556c3-5b91-4f2d-8f1f-6a1b2eae0367-kube-api-access-wv75s\") pod \"nova-operator-controller-manager-7f84474648-htnfl\" (UID: \"b32556c3-5b91-4f2d-8f1f-6a1b2eae0367\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.371847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjks\" (UniqueName: \"kubernetes.io/projected/faf0ec50-97de-40e8-9e7e-c407f08e2de6-kube-api-access-2tjks\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.385341 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.386988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmwf4\" (UniqueName: \"kubernetes.io/projected/c898928b-40b3-46d7-87e7-dfd483949ed2-kube-api-access-pmwf4\") pod \"neutron-operator-controller-manager-776c5696bf-nsm5m\" (UID: \"c898928b-40b3-46d7-87e7-dfd483949ed2\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.395218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bv98\" (UniqueName: \"kubernetes.io/projected/138d02d6-70f7-4310-b250-2756a22333b5-kube-api-access-5bv98\") pod \"octavia-operator-controller-manager-5f4f55cb5c-9cbr4\" (UID: \"138d02d6-70f7-4310-b250-2756a22333b5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.396145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9zc\" (UniqueName: \"kubernetes.io/projected/79c0c137-135c-49a1-bd73-20e6325ca1e6-kube-api-access-wg9zc\") pod \"ovn-operator-controller-manager-bbc5b68f9-lf9b6\" (UID: \"79c0c137-135c-49a1-bd73-20e6325ca1e6\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.429959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2zq\" (UniqueName: \"kubernetes.io/projected/5a1032a1-2d67-404d-8461-c84ab72bd2a3-kube-api-access-wj2zq\") pod \"placement-operator-controller-manager-574d45c66c-gb8wb\" (UID: \"5a1032a1-2d67-404d-8461-c84ab72bd2a3\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.430007 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spwk\" (UniqueName: \"kubernetes.io/projected/19eab663-f7c2-4a2a-923c-a5806353c911-kube-api-access-4spwk\") pod \"watcher-operator-controller-manager-7b8d757b5d-r8fgz\" (UID: \"19eab663-f7c2-4a2a-923c-a5806353c911\") " pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.430044 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv76v\" (UniqueName: \"kubernetes.io/projected/6dea1f44-0092-43b8-9576-b8b64b08d923-kube-api-access-zv76v\") pod \"telemetry-operator-controller-manager-6854b8b9d9-rmdj4\" (UID: \"6dea1f44-0092-43b8-9576-b8b64b08d923\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.430068 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qgtf\" (UniqueName: \"kubernetes.io/projected/ace7279e-00c3-42fa-8dfd-ff8f3256c6f0-kube-api-access-5qgtf\") pod \"test-operator-controller-manager-5c5cb9c4d7-bwzgn\" (UID: \"ace7279e-00c3-42fa-8dfd-ff8f3256c6f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.430097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkx84\" (UniqueName: \"kubernetes.io/projected/67b29df1-eced-4bfd-9c8c-24d56f0f880c-kube-api-access-pkx84\") pod \"swift-operator-controller-manager-7f9cc5dd44-t78v7\" (UID: \"67b29df1-eced-4bfd-9c8c-24d56f0f880c\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.447676 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.460577 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.461219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkx84\" (UniqueName: \"kubernetes.io/projected/67b29df1-eced-4bfd-9c8c-24d56f0f880c-kube-api-access-pkx84\") pod \"swift-operator-controller-manager-7f9cc5dd44-t78v7\" (UID: \"67b29df1-eced-4bfd-9c8c-24d56f0f880c\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.471084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2zq\" (UniqueName: \"kubernetes.io/projected/5a1032a1-2d67-404d-8461-c84ab72bd2a3-kube-api-access-wj2zq\") pod \"placement-operator-controller-manager-574d45c66c-gb8wb\" (UID: \"5a1032a1-2d67-404d-8461-c84ab72bd2a3\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.485508 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.489032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qgtf\" (UniqueName: \"kubernetes.io/projected/ace7279e-00c3-42fa-8dfd-ff8f3256c6f0-kube-api-access-5qgtf\") pod \"test-operator-controller-manager-5c5cb9c4d7-bwzgn\" (UID: \"ace7279e-00c3-42fa-8dfd-ff8f3256c6f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.489622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv76v\" (UniqueName: \"kubernetes.io/projected/6dea1f44-0092-43b8-9576-b8b64b08d923-kube-api-access-zv76v\") pod \"telemetry-operator-controller-manager-6854b8b9d9-rmdj4\" (UID: \"6dea1f44-0092-43b8-9576-b8b64b08d923\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.523798 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.532855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spwk\" (UniqueName: \"kubernetes.io/projected/19eab663-f7c2-4a2a-923c-a5806353c911-kube-api-access-4spwk\") pod \"watcher-operator-controller-manager-7b8d757b5d-r8fgz\" (UID: \"19eab663-f7c2-4a2a-923c-a5806353c911\") " pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.533218 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.542086 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.553604 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.553997 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.555002 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.557891 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.561433 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.561592 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.561702 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-p7724" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.561890 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.574111 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spwk\" (UniqueName: \"kubernetes.io/projected/19eab663-f7c2-4a2a-923c-a5806353c911-kube-api-access-4spwk\") pod \"watcher-operator-controller-manager-7b8d757b5d-r8fgz\" (UID: \"19eab663-f7c2-4a2a-923c-a5806353c911\") " pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.606874 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.634131 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479nq\" (UniqueName: \"kubernetes.io/projected/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-kube-api-access-479nq\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.634189 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.634221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod 
\"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.644498 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.652183 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.653250 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq"] Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.655646 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xfnst" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.684164 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:15:04 crc kubenswrapper[4687]: W0314 09:15:04.701681 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b416da_f3c4_43f1_a91a_14dac5c1cf25.slice/crio-3d3db83ee95ab002942634fd44f4ebd5a49f010e11a209e083b18a5f88560cb0 WatchSource:0}: Error finding container 3d3db83ee95ab002942634fd44f4ebd5a49f010e11a209e083b18a5f88560cb0: Status 404 returned error can't find the container with id 3d3db83ee95ab002942634fd44f4ebd5a49f010e11a209e083b18a5f88560cb0 Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.735571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.735925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcn5w\" (UniqueName: \"kubernetes.io/projected/50752c5a-72da-4aa5-838e-1b8f7d3ccb04-kube-api-access-hcn5w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j7pgq\" (UID: \"50752c5a-72da-4aa5-838e-1b8f7d3ccb04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.735976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479nq\" (UniqueName: \"kubernetes.io/projected/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-kube-api-access-479nq\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " 
pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.736034 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.736066 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736325 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736388 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:05.236372993 +0000 UTC m=+1090.224613358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736389 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736454 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert podName:9a2eadbd-233b-49a0-b869-de204e01663c nodeName:}" failed. No retries permitted until 2026-03-14 09:15:05.736436274 +0000 UTC m=+1090.724676649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert") pod "infra-operator-controller-manager-54dc5b8f8d-m8hhf" (UID: "9a2eadbd-233b-49a0-b869-de204e01663c") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736501 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.736532 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:05.236523716 +0000 UTC m=+1090.224764091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.749682 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.757956 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479nq\" (UniqueName: \"kubernetes.io/projected/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-kube-api-access-479nq\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.840884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcn5w\" (UniqueName: \"kubernetes.io/projected/50752c5a-72da-4aa5-838e-1b8f7d3ccb04-kube-api-access-hcn5w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j7pgq\" (UID: \"50752c5a-72da-4aa5-838e-1b8f7d3ccb04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.840965 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.841163 4687 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: E0314 09:15:04.841206 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:05.841192577 +0000 UTC m=+1090.829432952 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:04 crc kubenswrapper[4687]: I0314 09:15:04.871198 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcn5w\" (UniqueName: \"kubernetes.io/projected/50752c5a-72da-4aa5-838e-1b8f7d3ccb04-kube-api-access-hcn5w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j7pgq\" (UID: \"50752c5a-72da-4aa5-838e-1b8f7d3ccb04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.043593 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.123275 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn"] Mar 14 09:15:05 crc kubenswrapper[4687]: W0314 09:15:05.137047 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76c5b65_d7d1_4986_90fe_dab7724bc142.slice/crio-99bd18bf7f3250aa6b2d4becf4dfdbe907ff3dc487858982320d9e6d21ac57c9 WatchSource:0}: Error finding container 99bd18bf7f3250aa6b2d4becf4dfdbe907ff3dc487858982320d9e6d21ac57c9: Status 404 returned error can't find the container with id 99bd18bf7f3250aa6b2d4becf4dfdbe907ff3dc487858982320d9e6d21ac57c9 Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.183065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-426nr"] Mar 14 09:15:05 crc kubenswrapper[4687]: W0314 09:15:05.185455 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc39ee13_24c1_4c4d_9aed_3ce11c3eceb9.slice/crio-63d0f94293a7994b0dd2477582ea03b2fec5e064d8a8c80ae193b0fdbee397da WatchSource:0}: Error finding container 63d0f94293a7994b0dd2477582ea03b2fec5e064d8a8c80ae193b0fdbee397da: Status 404 returned error can't find the container with id 63d0f94293a7994b0dd2477582ea03b2fec5e064d8a8c80ae193b0fdbee397da Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.247593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " 
pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.247650 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.247811 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.247901 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:06.247879964 +0000 UTC m=+1091.236120419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.247929 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.248019 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:06.248000507 +0000 UTC m=+1091.236240872 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.495685 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" event={"ID":"bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9","Type":"ContainerStarted","Data":"63d0f94293a7994b0dd2477582ea03b2fec5e064d8a8c80ae193b0fdbee397da"} Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.496848 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" event={"ID":"56b6df44-ea65-46b4-93da-67d70a3769b1","Type":"ContainerStarted","Data":"fea5b24a95654ae1c1c69840da6121abc155b9b9ed62870b238097dade0d8f7b"} Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.498427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" event={"ID":"52b416da-f3c4-43f1-a91a-14dac5c1cf25","Type":"ContainerStarted","Data":"3d3db83ee95ab002942634fd44f4ebd5a49f010e11a209e083b18a5f88560cb0"} Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.499947 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" event={"ID":"e76c5b65-d7d1-4986-90fe-dab7724bc142","Type":"ContainerStarted","Data":"99bd18bf7f3250aa6b2d4becf4dfdbe907ff3dc487858982320d9e6d21ac57c9"} Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.545698 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.559717 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.593783 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.623508 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.626819 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-htnfl"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.632506 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.639281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.649052 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.661708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6"] Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.665549 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg9zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-lf9b6_openstack-operators(79c0c137-135c-49a1-bd73-20e6325ca1e6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.666729 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" podUID="79c0c137-135c-49a1-bd73-20e6325ca1e6" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.672591 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.680029 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.754326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: 
\"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.754468 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.754531 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert podName:9a2eadbd-233b-49a0-b869-de204e01663c nodeName:}" failed. No retries permitted until 2026-03-14 09:15:07.754511875 +0000 UTC m=+1092.742752250 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert") pod "infra-operator-controller-manager-54dc5b8f8d-m8hhf" (UID: "9a2eadbd-233b-49a0-b869-de204e01663c") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: W0314 09:15:05.827011 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod138d02d6_70f7_4310_b250_2756a22333b5.slice/crio-396038e97c881825c9ff8922eaf0a67fcef3e1ce1a25cd54d9d51c7ddd91f0a9 WatchSource:0}: Error finding container 396038e97c881825c9ff8922eaf0a67fcef3e1ce1a25cd54d9d51c7ddd91f0a9: Status 404 returned error can't find the container with id 396038e97c881825c9ff8922eaf0a67fcef3e1ce1a25cd54d9d51c7ddd91f0a9 Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.832458 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5bv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-9cbr4_openstack-operators(138d02d6-70f7-4310-b250-2756a22333b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.833728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" podUID="138d02d6-70f7-4310-b250-2756a22333b5" Mar 14 09:15:05 crc kubenswrapper[4687]: W0314 09:15:05.840188 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace7279e_00c3_42fa_8dfd_ff8f3256c6f0.slice/crio-4868318e691ee60711df4b232a27d395a7272c247b99d3355827c3a008795c95 WatchSource:0}: Error finding container 4868318e691ee60711df4b232a27d395a7272c247b99d3355827c3a008795c95: Status 404 returned error can't find the container with id 4868318e691ee60711df4b232a27d395a7272c247b99d3355827c3a008795c95 Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.843299 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qgtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-bwzgn_openstack-operators(ace7279e-00c3-42fa-8dfd-ff8f3256c6f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.846307 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podUID="ace7279e-00c3-42fa-8dfd-ff8f3256c6f0" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.852563 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.855101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 
09:15:05.855310 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.855373 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:07.855358672 +0000 UTC m=+1092.843599047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.856492 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4spwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b8d757b5d-r8fgz_openstack-operators(19eab663-f7c2-4a2a-923c-a5806353c911): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.857950 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podUID="19eab663-f7c2-4a2a-923c-a5806353c911" Mar 14 09:15:05 crc 
kubenswrapper[4687]: I0314 09:15:05.861884 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.872780 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn"] Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.876714 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcn5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j7pgq_openstack-operators(50752c5a-72da-4aa5-838e-1b8f7d3ccb04): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:05 crc kubenswrapper[4687]: E0314 09:15:05.878822 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podUID="50752c5a-72da-4aa5-838e-1b8f7d3ccb04" Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.879772 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq"] Mar 14 09:15:05 crc kubenswrapper[4687]: I0314 09:15:05.895776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz"] Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.269200 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.269261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.269484 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.269570 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:08.269553215 +0000 UTC m=+1093.257793590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.269619 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.269639 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:08.269633087 +0000 UTC m=+1093.257873462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.527469 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" event={"ID":"59360bb4-cf7e-41a7-b78e-0615b4cd15e4","Type":"ContainerStarted","Data":"a91fccd1a47144a1cf63d788c45d092b3e4753b6b47a4cd4ce19d756cea1a8f1"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.530083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" event={"ID":"ace7279e-00c3-42fa-8dfd-ff8f3256c6f0","Type":"ContainerStarted","Data":"4868318e691ee60711df4b232a27d395a7272c247b99d3355827c3a008795c95"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.532137 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" event={"ID":"79c0c137-135c-49a1-bd73-20e6325ca1e6","Type":"ContainerStarted","Data":"410a8372b62e8a9f076f881202c3fab6feef59d5d1e8798d3cf8ebb1bac63ac8"} Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.533122 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podUID="ace7279e-00c3-42fa-8dfd-ff8f3256c6f0" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.536178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" event={"ID":"6dea1f44-0092-43b8-9576-b8b64b08d923","Type":"ContainerStarted","Data":"fb9821299f58da2262287fac3b89825a90bfa4dd5f292ed3b71b4b50d5c37ed6"} Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.540653 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" podUID="79c0c137-135c-49a1-bd73-20e6325ca1e6" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.542581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" event={"ID":"40a7f8a7-9a5c-4607-a685-747c0fc779b5","Type":"ContainerStarted","Data":"afd6e7931effec837f336866516eb5f2501c21794b5e61c74abd2ecf37d20274"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.548126 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" event={"ID":"50752c5a-72da-4aa5-838e-1b8f7d3ccb04","Type":"ContainerStarted","Data":"3966f8261969617fb587c8f5750dce0e7b076cffa67f00335a80bcc418ab4f79"} Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.553022 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podUID="50752c5a-72da-4aa5-838e-1b8f7d3ccb04" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.556017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" event={"ID":"b32556c3-5b91-4f2d-8f1f-6a1b2eae0367","Type":"ContainerStarted","Data":"ec9505baa63e3520c8ccecae25ab620fcb6d1ec062a3f89277ca33b8c8d522eb"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.568186 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" event={"ID":"67b29df1-eced-4bfd-9c8c-24d56f0f880c","Type":"ContainerStarted","Data":"eb4f70d337d7c6ca877fa16e8d19e00445e7bdc8980cdd92c0da13db9c0b0cb1"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.569749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" event={"ID":"5a1032a1-2d67-404d-8461-c84ab72bd2a3","Type":"ContainerStarted","Data":"a5d4bed98703c09936650d82148ebdb5166528b4124b0a6d084b346a8ac21a4a"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.571172 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" 
event={"ID":"481cbeed-e6fd-4afe-a6af-043a6a06a521","Type":"ContainerStarted","Data":"956309f8add5d2c6d29f6e665c04a76f01a743b0b532880eb38cac8a4dfca96a"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.585294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" event={"ID":"19eab663-f7c2-4a2a-923c-a5806353c911","Type":"ContainerStarted","Data":"2a24fc5fa4c110a71fba427232a6eab8bdfdfebd6e7b49085320cee48c6837d8"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.599743 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" event={"ID":"6de94ed3-f1ea-4cb3-88a7-c78a0af38830","Type":"ContainerStarted","Data":"6749dd1aed1c3bb8103e5a8c073b17380054d00a82fcd694e55d563f8e09345a"} Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.607605 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podUID="19eab663-f7c2-4a2a-923c-a5806353c911" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.612205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" event={"ID":"138d02d6-70f7-4310-b250-2756a22333b5","Type":"ContainerStarted","Data":"396038e97c881825c9ff8922eaf0a67fcef3e1ce1a25cd54d9d51c7ddd91f0a9"} Mar 14 09:15:06 crc kubenswrapper[4687]: E0314 09:15:06.621616 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" podUID="138d02d6-70f7-4310-b250-2756a22333b5" Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.622482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" event={"ID":"44cc6d95-7c97-4e12-a369-4595f9a540cd","Type":"ContainerStarted","Data":"3a3bfad9ba3bad2891d4fc25b814adebdcab799695cf446d6f5b838cd4845e1c"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.623826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" event={"ID":"18179c6b-b84c-4bd5-b077-fc8c8689e12f","Type":"ContainerStarted","Data":"33816eb56fbcd613d368f1ed2849ee88d6432aff2d1fe0977bee5d7827cc91a0"} Mar 14 09:15:06 crc kubenswrapper[4687]: I0314 09:15:06.625561 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" event={"ID":"c898928b-40b3-46d7-87e7-dfd483949ed2","Type":"ContainerStarted","Data":"75094daccf624833e30439945511a3754e9af9a5b4fd99967378db666b714583"} Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.657887 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" podUID="138d02d6-70f7-4310-b250-2756a22333b5" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.658074 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podUID="ace7279e-00c3-42fa-8dfd-ff8f3256c6f0" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.658121 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podUID="50752c5a-72da-4aa5-838e-1b8f7d3ccb04" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.658144 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podUID="19eab663-f7c2-4a2a-923c-a5806353c911" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.669398 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" podUID="79c0c137-135c-49a1-bd73-20e6325ca1e6" Mar 14 09:15:07 crc kubenswrapper[4687]: I0314 09:15:07.797420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " 
pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.798999 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.799056 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert podName:9a2eadbd-233b-49a0-b869-de204e01663c nodeName:}" failed. No retries permitted until 2026-03-14 09:15:11.799039305 +0000 UTC m=+1096.787279680 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert") pod "infra-operator-controller-manager-54dc5b8f8d-m8hhf" (UID: "9a2eadbd-233b-49a0-b869-de204e01663c") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:07 crc kubenswrapper[4687]: I0314 09:15:07.898730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.900492 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:07 crc kubenswrapper[4687]: E0314 09:15:07.900582 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:11.900554998 +0000 UTC m=+1096.888795373 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:08 crc kubenswrapper[4687]: I0314 09:15:08.307641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:08 crc kubenswrapper[4687]: I0314 09:15:08.307711 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:08 crc kubenswrapper[4687]: E0314 09:15:08.307915 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:08 crc kubenswrapper[4687]: E0314 09:15:08.307974 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:12.307954793 +0000 UTC m=+1097.296195168 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:08 crc kubenswrapper[4687]: E0314 09:15:08.308033 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:08 crc kubenswrapper[4687]: E0314 09:15:08.308062 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:12.308051706 +0000 UTC m=+1097.296292081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:11 crc kubenswrapper[4687]: I0314 09:15:11.859184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:11 crc kubenswrapper[4687]: E0314 09:15:11.859362 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:11 crc kubenswrapper[4687]: E0314 09:15:11.859635 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert 
podName:9a2eadbd-233b-49a0-b869-de204e01663c nodeName:}" failed. No retries permitted until 2026-03-14 09:15:19.859620311 +0000 UTC m=+1104.847860686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert") pod "infra-operator-controller-manager-54dc5b8f8d-m8hhf" (UID: "9a2eadbd-233b-49a0-b869-de204e01663c") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:11 crc kubenswrapper[4687]: I0314 09:15:11.961680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:11 crc kubenswrapper[4687]: E0314 09:15:11.961890 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:11 crc kubenswrapper[4687]: E0314 09:15:11.961993 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:19.961974905 +0000 UTC m=+1104.950215280 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:12 crc kubenswrapper[4687]: I0314 09:15:12.368818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:12 crc kubenswrapper[4687]: I0314 09:15:12.368886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:12 crc kubenswrapper[4687]: E0314 09:15:12.369025 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:12 crc kubenswrapper[4687]: E0314 09:15:12.369082 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.369065192 +0000 UTC m=+1105.357305567 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:12 crc kubenswrapper[4687]: E0314 09:15:12.369218 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:12 crc kubenswrapper[4687]: E0314 09:15:12.369267 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.369253577 +0000 UTC m=+1105.357493952 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:18 crc kubenswrapper[4687]: E0314 09:15:18.730198 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 14 09:15:18 crc kubenswrapper[4687]: E0314 09:15:18.730832 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nqzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-cqznw_openstack-operators(59360bb4-cf7e-41a7-b78e-0615b4cd15e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:18 crc kubenswrapper[4687]: E0314 09:15:18.732514 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" podUID="59360bb4-cf7e-41a7-b78e-0615b4cd15e4" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.276168 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.276579 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wv75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f84474648-htnfl_openstack-operators(b32556c3-5b91-4f2d-8f1f-6a1b2eae0367): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.278037 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" podUID="b32556c3-5b91-4f2d-8f1f-6a1b2eae0367" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.753423 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" podUID="59360bb4-cf7e-41a7-b78e-0615b4cd15e4" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.753430 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" podUID="b32556c3-5b91-4f2d-8f1f-6a1b2eae0367" Mar 14 09:15:19 crc kubenswrapper[4687]: I0314 09:15:19.876519 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:19 crc kubenswrapper[4687]: I0314 09:15:19.888155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2eadbd-233b-49a0-b869-de204e01663c-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-m8hhf\" (UID: \"9a2eadbd-233b-49a0-b869-de204e01663c\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:19 crc kubenswrapper[4687]: I0314 09:15:19.977987 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.978124 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:19 crc kubenswrapper[4687]: E0314 09:15:19.978196 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert podName:faf0ec50-97de-40e8-9e7e-c407f08e2de6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:35.978177731 +0000 UTC m=+1120.966418106 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" (UID: "faf0ec50-97de-40e8-9e7e-c407f08e2de6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.181723 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.400870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.400932 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:20 crc kubenswrapper[4687]: E0314 09:15:20.401087 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4687]: E0314 09:15:20.401135 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:36.401119359 +0000 UTC m=+1121.389359734 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4687]: E0314 09:15:20.401435 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4687]: E0314 09:15:20.401524 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs podName:b608e71f-c0d4-463a-ba9f-a6becc4f54b6 nodeName:}" failed. No retries permitted until 2026-03-14 09:15:36.401505448 +0000 UTC m=+1121.389745823 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs") pod "openstack-operator-controller-manager-59cdb7596d-w9jwl" (UID: "b608e71f-c0d4-463a-ba9f-a6becc4f54b6") : secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.758120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" event={"ID":"6dea1f44-0092-43b8-9576-b8b64b08d923","Type":"ContainerStarted","Data":"8d1d207980a4c5ab8b617e363c20cbfd52eb7576aaf4aae1d9260159cf403edc"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.758456 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.774501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" event={"ID":"40a7f8a7-9a5c-4607-a685-747c0fc779b5","Type":"ContainerStarted","Data":"6ef04d4aa4dd965ccfb86ed3c714b079aedc294cccf88fbfcaa5d564b034543f"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.775124 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.776694 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" event={"ID":"56b6df44-ea65-46b4-93da-67d70a3769b1","Type":"ContainerStarted","Data":"658cdccdcee3068372b0296df8536c771d0cd1279730f170567862b3b219b937"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.777106 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:20 
crc kubenswrapper[4687]: I0314 09:15:20.797086 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" podStartSLOduration=3.18017842 podStartE2EDuration="16.797065482s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.653781222 +0000 UTC m=+1090.642021597" lastFinishedPulling="2026-03-14 09:15:19.270668264 +0000 UTC m=+1104.258908659" observedRunningTime="2026-03-14 09:15:20.782997356 +0000 UTC m=+1105.771237731" watchObservedRunningTime="2026-03-14 09:15:20.797065482 +0000 UTC m=+1105.785305857" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.806028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" event={"ID":"18179c6b-b84c-4bd5-b077-fc8c8689e12f","Type":"ContainerStarted","Data":"5c7fa44761f4a09a3d0a434c6b82e20d33831cb4131e132bb8a158881f17e416"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.806666 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.812929 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" podStartSLOduration=3.138468582 podStartE2EDuration="17.812912822s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:04.595187948 +0000 UTC m=+1089.583428323" lastFinishedPulling="2026-03-14 09:15:19.269632168 +0000 UTC m=+1104.257872563" observedRunningTime="2026-03-14 09:15:20.809793875 +0000 UTC m=+1105.798034250" watchObservedRunningTime="2026-03-14 09:15:20.812912822 +0000 UTC m=+1105.801153197" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.826129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" event={"ID":"e76c5b65-d7d1-4986-90fe-dab7724bc142","Type":"ContainerStarted","Data":"1cae70433e135837f30fa17ecaa072a59c4bb775c27912acf9799539fda99a76"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.826195 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.831647 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" podStartSLOduration=4.216349119 podStartE2EDuration="17.831629032s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.653362532 +0000 UTC m=+1090.641602907" lastFinishedPulling="2026-03-14 09:15:19.268642445 +0000 UTC m=+1104.256882820" observedRunningTime="2026-03-14 09:15:20.82706389 +0000 UTC m=+1105.815304255" watchObservedRunningTime="2026-03-14 09:15:20.831629032 +0000 UTC m=+1105.819869427" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.837248 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" event={"ID":"6de94ed3-f1ea-4cb3-88a7-c78a0af38830","Type":"ContainerStarted","Data":"2f1f7b3563dc85ebe7c2504c8e947ff3bd6468ed6827752cf373e09275969d5a"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.837893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.838967 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" 
event={"ID":"c898928b-40b3-46d7-87e7-dfd483949ed2","Type":"ContainerStarted","Data":"8166387d66b68f33242d58461e4e4aadfbff2d37608fbf0085eed7e1f2020c1a"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.839281 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.840970 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" event={"ID":"67b29df1-eced-4bfd-9c8c-24d56f0f880c","Type":"ContainerStarted","Data":"808b1368756e332bb1817a67a92131b1d4e74db6d9657edfbeb2159461abbfd8"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.841289 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.845932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" event={"ID":"44cc6d95-7c97-4e12-a369-4595f9a540cd","Type":"ContainerStarted","Data":"0098c060068126080d129b486813d9463fc3a46bb8b76af51b8fb8c37c74c40e"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.845994 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.847699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" event={"ID":"bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9","Type":"ContainerStarted","Data":"bd9252f3422b98fc2574373583af5c9337617925a29d2bbc52bd1d9616da884c"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.848196 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.850448 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" podStartSLOduration=4.186310782 podStartE2EDuration="17.850439334s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.583553074 +0000 UTC m=+1090.571793449" lastFinishedPulling="2026-03-14 09:15:19.247681626 +0000 UTC m=+1104.235922001" observedRunningTime="2026-03-14 09:15:20.841579626 +0000 UTC m=+1105.829820001" watchObservedRunningTime="2026-03-14 09:15:20.850439334 +0000 UTC m=+1105.838679709" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.853765 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" event={"ID":"52b416da-f3c4-43f1-a91a-14dac5c1cf25","Type":"ContainerStarted","Data":"bee9952150170eb31d7708f1034982fdc921313f17e51800c7dbf5d51560adf3"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.854011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.857817 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" podStartSLOduration=3.751011516 podStartE2EDuration="17.857803886s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.1398537 +0000 UTC m=+1090.128094075" lastFinishedPulling="2026-03-14 09:15:19.24664605 +0000 UTC m=+1104.234886445" observedRunningTime="2026-03-14 09:15:20.857147899 +0000 UTC m=+1105.845388264" watchObservedRunningTime="2026-03-14 09:15:20.857803886 +0000 UTC m=+1105.846044251" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 
09:15:20.868490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" event={"ID":"5a1032a1-2d67-404d-8461-c84ab72bd2a3","Type":"ContainerStarted","Data":"60221465bc3c32be10e4a82c362fe0b6e87db6b412da2e9f8377d55ff07114e4"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.869071 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.884776 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" event={"ID":"481cbeed-e6fd-4afe-a6af-043a6a06a521","Type":"ContainerStarted","Data":"f5b9985bb1b397b28c8dc20188f2b1b03cfad604ec89c3c6241ae113e11ad2a5"} Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.885062 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" podStartSLOduration=3.292051533 podStartE2EDuration="16.885043355s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.652983022 +0000 UTC m=+1090.641223397" lastFinishedPulling="2026-03-14 09:15:19.245974844 +0000 UTC m=+1104.234215219" observedRunningTime="2026-03-14 09:15:20.881826176 +0000 UTC m=+1105.870066561" watchObservedRunningTime="2026-03-14 09:15:20.885043355 +0000 UTC m=+1105.873283730" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.885445 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.921504 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" podStartSLOduration=3.3907486860000002 
podStartE2EDuration="17.92148051s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:04.714225814 +0000 UTC m=+1089.702466189" lastFinishedPulling="2026-03-14 09:15:19.244957628 +0000 UTC m=+1104.233198013" observedRunningTime="2026-03-14 09:15:20.920602939 +0000 UTC m=+1105.908843314" watchObservedRunningTime="2026-03-14 09:15:20.92148051 +0000 UTC m=+1105.909720885" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.949452 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf"] Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.949510 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" podStartSLOduration=4.272713993 podStartE2EDuration="17.949494669s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.59190917 +0000 UTC m=+1090.580149545" lastFinishedPulling="2026-03-14 09:15:19.268689846 +0000 UTC m=+1104.256930221" observedRunningTime="2026-03-14 09:15:20.946094106 +0000 UTC m=+1105.934334481" watchObservedRunningTime="2026-03-14 09:15:20.949494669 +0000 UTC m=+1105.937735044" Mar 14 09:15:20 crc kubenswrapper[4687]: I0314 09:15:20.982697 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" podStartSLOduration=4.308419183 podStartE2EDuration="17.982678385s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.572740406 +0000 UTC m=+1090.560980781" lastFinishedPulling="2026-03-14 09:15:19.246999608 +0000 UTC m=+1104.235239983" observedRunningTime="2026-03-14 09:15:20.979731643 +0000 UTC m=+1105.967972018" watchObservedRunningTime="2026-03-14 09:15:20.982678385 +0000 UTC m=+1105.970918760" Mar 14 09:15:21 crc kubenswrapper[4687]: I0314 09:15:21.004777 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" podStartSLOduration=3.946268055 podStartE2EDuration="18.004763498s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.188594977 +0000 UTC m=+1090.176835362" lastFinishedPulling="2026-03-14 09:15:19.24709039 +0000 UTC m=+1104.235330805" observedRunningTime="2026-03-14 09:15:21.001556319 +0000 UTC m=+1105.989796694" watchObservedRunningTime="2026-03-14 09:15:21.004763498 +0000 UTC m=+1105.993003873" Mar 14 09:15:21 crc kubenswrapper[4687]: I0314 09:15:21.031694 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" podStartSLOduration=4.438484895 podStartE2EDuration="18.03167995s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.653393393 +0000 UTC m=+1090.641633768" lastFinishedPulling="2026-03-14 09:15:19.246588438 +0000 UTC m=+1104.234828823" observedRunningTime="2026-03-14 09:15:21.026295427 +0000 UTC m=+1106.014535802" watchObservedRunningTime="2026-03-14 09:15:21.03167995 +0000 UTC m=+1106.019920325" Mar 14 09:15:21 crc kubenswrapper[4687]: I0314 09:15:21.046067 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" podStartSLOduration=4.3408993559999995 podStartE2EDuration="18.046051873s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.565551488 +0000 UTC m=+1090.553791863" lastFinishedPulling="2026-03-14 09:15:19.270704005 +0000 UTC m=+1104.258944380" observedRunningTime="2026-03-14 09:15:21.041479221 +0000 UTC m=+1106.029719596" watchObservedRunningTime="2026-03-14 09:15:21.046051873 +0000 UTC m=+1106.034292248" Mar 14 09:15:21 crc kubenswrapper[4687]: I0314 09:15:21.058681 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" podStartSLOduration=3.6017151800000002 podStartE2EDuration="17.058664523s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.830103687 +0000 UTC m=+1090.818344062" lastFinishedPulling="2026-03-14 09:15:19.28705303 +0000 UTC m=+1104.275293405" observedRunningTime="2026-03-14 09:15:21.057525485 +0000 UTC m=+1106.045765860" watchObservedRunningTime="2026-03-14 09:15:21.058664523 +0000 UTC m=+1106.046904898" Mar 14 09:15:21 crc kubenswrapper[4687]: I0314 09:15:21.894380 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" event={"ID":"9a2eadbd-233b-49a0-b869-de204e01663c","Type":"ContainerStarted","Data":"1d73afcbdce524fbb2aa0413147f97cf5cbe7902d82e31776a20c728675857ab"} Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.015548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-d47688694-v6wjr" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.035936 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-bb2q4" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.064376 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wghgn" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.091111 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-rblcv" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.117088 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-57b484b4df-nbctg" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.118996 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-426nr" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.128737 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.263774 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pttvx" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.344565 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-s6578" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.450283 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-nsm5m" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.545263 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gb8wb" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.557396 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-t78v7" Mar 14 09:15:24 crc kubenswrapper[4687]: I0314 09:15:24.567377 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-rmdj4" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.332321 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.333110 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qgtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-bwzgn_openstack-operators(ace7279e-00c3-42fa-8dfd-ff8f3256c6f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.334394 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podUID="ace7279e-00c3-42fa-8dfd-ff8f3256c6f0" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.733097 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:90974abe83ad3eed4e7f766dbb41b2ee3e2494a1494a00368fb583017edcda04" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.733272 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:90974abe83ad3eed4e7f766dbb41b2ee3e2494a1494a00368fb583017edcda04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtx4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-54dc5b8f8d-m8hhf_openstack-operators(9a2eadbd-233b-49a0-b869-de204e01663c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:32 crc kubenswrapper[4687]: E0314 09:15:32.734401 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" podUID="9a2eadbd-233b-49a0-b869-de204e01663c" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:32.993109 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:90974abe83ad3eed4e7f766dbb41b2ee3e2494a1494a00368fb583017edcda04\\\"\"" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" podUID="9a2eadbd-233b-49a0-b869-de204e01663c" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.489039 4687 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.489219 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcn5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j7pgq_openstack-operators(50752c5a-72da-4aa5-838e-1b8f7d3ccb04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.490419 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podUID="50752c5a-72da-4aa5-838e-1b8f7d3ccb04" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.986322 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.986650 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.986787 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4spwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b8d757b5d-r8fgz_openstack-operators(19eab663-f7c2-4a2a-923c-a5806353c911): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:33 crc kubenswrapper[4687]: E0314 09:15:33.988276 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podUID="19eab663-f7c2-4a2a-923c-a5806353c911" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.006438 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" event={"ID":"59360bb4-cf7e-41a7-b78e-0615b4cd15e4","Type":"ContainerStarted","Data":"6a5480fe4066ea91141848d63f72793744f30b3b717cb498c771ba8f8deee1b6"} Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.008002 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" 
event={"ID":"138d02d6-70f7-4310-b250-2756a22333b5","Type":"ContainerStarted","Data":"ddb8b0bbef277d5bcf57eb283378b889d1d69e89c3200572d6c42ef6232f9cfc"} Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.008198 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.009408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" event={"ID":"b32556c3-5b91-4f2d-8f1f-6a1b2eae0367","Type":"ContainerStarted","Data":"14b805754048b0590f35acc7df0d4f756b1e7752871d0ec0e524c489b1691194"} Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.009608 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.011158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" event={"ID":"79c0c137-135c-49a1-bd73-20e6325ca1e6","Type":"ContainerStarted","Data":"5e1b07532c27e5d192a53c48c883c31e79f2d69e4d7bc60dd716e4e74a26cac5"} Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.011354 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.021869 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" podStartSLOduration=3.44772233 podStartE2EDuration="32.021851086s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.595999702 +0000 UTC m=+1090.584240077" lastFinishedPulling="2026-03-14 09:15:34.170128458 +0000 UTC m=+1119.158368833" observedRunningTime="2026-03-14 
09:15:35.019323973 +0000 UTC m=+1120.007564348" watchObservedRunningTime="2026-03-14 09:15:35.021851086 +0000 UTC m=+1120.010091461" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.034324 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" podStartSLOduration=3.857300978 podStartE2EDuration="32.034304172s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.829916692 +0000 UTC m=+1090.818157067" lastFinishedPulling="2026-03-14 09:15:34.006919886 +0000 UTC m=+1118.995160261" observedRunningTime="2026-03-14 09:15:35.032617241 +0000 UTC m=+1120.020857626" watchObservedRunningTime="2026-03-14 09:15:35.034304172 +0000 UTC m=+1120.022544547" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.048837 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" podStartSLOduration=2.684886091 podStartE2EDuration="31.048819498s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.665394469 +0000 UTC m=+1090.653634844" lastFinishedPulling="2026-03-14 09:15:34.029327876 +0000 UTC m=+1119.017568251" observedRunningTime="2026-03-14 09:15:35.043491887 +0000 UTC m=+1120.031732262" watchObservedRunningTime="2026-03-14 09:15:35.048819498 +0000 UTC m=+1120.037059873" Mar 14 09:15:35 crc kubenswrapper[4687]: I0314 09:15:35.057003 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" podStartSLOduration=3.652603688 podStartE2EDuration="32.056993149s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.604522923 +0000 UTC m=+1090.592763298" lastFinishedPulling="2026-03-14 09:15:34.008912394 +0000 UTC m=+1118.997152759" observedRunningTime="2026-03-14 09:15:35.055628566 +0000 
UTC m=+1120.043868941" watchObservedRunningTime="2026-03-14 09:15:35.056993149 +0000 UTC m=+1120.045233524" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.029001 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.034627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faf0ec50-97de-40e8-9e7e-c407f08e2de6-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4\" (UID: \"faf0ec50-97de-40e8-9e7e-c407f08e2de6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.293445 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.435682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.436078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.450964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-webhook-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.452380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b608e71f-c0d4-463a-ba9f-a6becc4f54b6-metrics-certs\") pod \"openstack-operator-controller-manager-59cdb7596d-w9jwl\" (UID: \"b608e71f-c0d4-463a-ba9f-a6becc4f54b6\") " pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.585221 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4"] Mar 14 09:15:36 crc kubenswrapper[4687]: W0314 09:15:36.586223 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf0ec50_97de_40e8_9e7e_c407f08e2de6.slice/crio-76b596c5e0c82e3599313b232329379098f5318bdf713be951c5ed9ae0248903 WatchSource:0}: Error finding container 76b596c5e0c82e3599313b232329379098f5318bdf713be951c5ed9ae0248903: Status 404 returned error can't find the container with id 76b596c5e0c82e3599313b232329379098f5318bdf713be951c5ed9ae0248903 Mar 14 09:15:36 crc kubenswrapper[4687]: I0314 09:15:36.745534 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:37 crc kubenswrapper[4687]: I0314 09:15:37.035982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" event={"ID":"faf0ec50-97de-40e8-9e7e-c407f08e2de6","Type":"ContainerStarted","Data":"76b596c5e0c82e3599313b232329379098f5318bdf713be951c5ed9ae0248903"} Mar 14 09:15:37 crc kubenswrapper[4687]: I0314 09:15:37.170389 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl"] Mar 14 09:15:37 crc kubenswrapper[4687]: W0314 09:15:37.174751 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb608e71f_c0d4_463a_ba9f_a6becc4f54b6.slice/crio-73ce54c71387fe36a78b1af28851e6506c98ae56136a07575004ee568554ebc5 WatchSource:0}: Error finding container 73ce54c71387fe36a78b1af28851e6506c98ae56136a07575004ee568554ebc5: Status 404 returned error can't find the container with id 73ce54c71387fe36a78b1af28851e6506c98ae56136a07575004ee568554ebc5 Mar 14 09:15:38 crc kubenswrapper[4687]: I0314 
09:15:38.045925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" event={"ID":"b608e71f-c0d4-463a-ba9f-a6becc4f54b6","Type":"ContainerStarted","Data":"94c81766ad018a8b3941e7108eb5e5c5510a6931ad900d76c6422dd9ade90e49"} Mar 14 09:15:38 crc kubenswrapper[4687]: I0314 09:15:38.046024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" event={"ID":"b608e71f-c0d4-463a-ba9f-a6becc4f54b6","Type":"ContainerStarted","Data":"73ce54c71387fe36a78b1af28851e6506c98ae56136a07575004ee568554ebc5"} Mar 14 09:15:38 crc kubenswrapper[4687]: I0314 09:15:38.046055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:38 crc kubenswrapper[4687]: I0314 09:15:38.074622 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" podStartSLOduration=34.07460368 podStartE2EDuration="34.07460368s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:15:38.072212322 +0000 UTC m=+1123.060452697" watchObservedRunningTime="2026-03-14 09:15:38.07460368 +0000 UTC m=+1123.062844055" Mar 14 09:15:39 crc kubenswrapper[4687]: I0314 09:15:39.055501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" event={"ID":"faf0ec50-97de-40e8-9e7e-c407f08e2de6","Type":"ContainerStarted","Data":"935e31bc2754daba9660318e3c59fed04835addda95f11e807da6bbdc9cfbb6a"} Mar 14 09:15:40 crc kubenswrapper[4687]: I0314 09:15:40.062758 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.386594 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.397139 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-cqznw" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.432508 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" podStartSLOduration=39.825163752 podStartE2EDuration="41.432485614s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:36.588122479 +0000 UTC m=+1121.576362854" lastFinishedPulling="2026-03-14 09:15:38.195444341 +0000 UTC m=+1123.183684716" observedRunningTime="2026-03-14 09:15:39.092774249 +0000 UTC m=+1124.081014624" watchObservedRunningTime="2026-03-14 09:15:44.432485614 +0000 UTC m=+1129.420725999" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.468902 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f84474648-htnfl" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.491018 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-9cbr4" Mar 14 09:15:44 crc kubenswrapper[4687]: I0314 09:15:44.537998 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-lf9b6" Mar 14 09:15:44 crc kubenswrapper[4687]: E0314 09:15:44.738695 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podUID="ace7279e-00c3-42fa-8dfd-ff8f3256c6f0" Mar 14 09:15:44 crc kubenswrapper[4687]: E0314 09:15:44.738909 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/openstack-k8s-operators/watcher-operator:0d9467a022a6d6a8b6465cdbc9e4aefc533e796a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podUID="19eab663-f7c2-4a2a-923c-a5806353c911" Mar 14 09:15:46 crc kubenswrapper[4687]: I0314 09:15:46.107896 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" event={"ID":"9a2eadbd-233b-49a0-b869-de204e01663c","Type":"ContainerStarted","Data":"80be208cef682bb260962a95be3b2078076ef930504c1a922166d318725694b5"} Mar 14 09:15:46 crc kubenswrapper[4687]: I0314 09:15:46.108425 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:15:46 crc kubenswrapper[4687]: I0314 09:15:46.127128 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" podStartSLOduration=18.949233606 podStartE2EDuration="43.127108163s" podCreationTimestamp="2026-03-14 09:15:03 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.000589056 +0000 UTC m=+1105.988829431" lastFinishedPulling="2026-03-14 09:15:45.178463613 +0000 UTC m=+1130.166703988" observedRunningTime="2026-03-14 09:15:46.120791498 +0000 UTC m=+1131.109031863" watchObservedRunningTime="2026-03-14 09:15:46.127108163 +0000 UTC m=+1131.115348558" Mar 14 09:15:46 
crc kubenswrapper[4687]: I0314 09:15:46.302612 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4" Mar 14 09:15:46 crc kubenswrapper[4687]: I0314 09:15:46.754279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59cdb7596d-w9jwl" Mar 14 09:15:48 crc kubenswrapper[4687]: E0314 09:15:48.738018 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podUID="50752c5a-72da-4aa5-838e-1b8f7d3ccb04" Mar 14 09:15:50 crc kubenswrapper[4687]: I0314 09:15:50.189009 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-m8hhf" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.138528 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557996-x72zh"] Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.140182 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.142534 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.142548 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.142657 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.146386 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-x72zh"] Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.228966 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqvt\" (UniqueName: \"kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt\") pod \"auto-csr-approver-29557996-x72zh\" (UID: \"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63\") " pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.330300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqvt\" (UniqueName: \"kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt\") pod \"auto-csr-approver-29557996-x72zh\" (UID: \"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63\") " pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.348840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqvt\" (UniqueName: \"kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt\") pod \"auto-csr-approver-29557996-x72zh\" (UID: \"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63\") " 
pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.487445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:00 crc kubenswrapper[4687]: I0314 09:16:00.922018 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-x72zh"] Mar 14 09:16:00 crc kubenswrapper[4687]: W0314 09:16:00.927820 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod435ba5a2_2b47_42e7_aa8f_0a4ce9cc0d63.slice/crio-57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b WatchSource:0}: Error finding container 57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b: Status 404 returned error can't find the container with id 57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.209434 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" event={"ID":"19eab663-f7c2-4a2a-923c-a5806353c911","Type":"ContainerStarted","Data":"ab1e4f70cf46da95a9b94ece46fcb51fad9c25a04b563699cd2f9d7971c083f0"} Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.209684 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.211654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" event={"ID":"ace7279e-00c3-42fa-8dfd-ff8f3256c6f0","Type":"ContainerStarted","Data":"e04c21204b9fc8828f1502ee81b35e425837e7c1ca666204b6c8f16469a35405"} Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.211817 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.212765 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-x72zh" event={"ID":"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63","Type":"ContainerStarted","Data":"57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b"} Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.228037 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" podStartSLOduration=2.713922448 podStartE2EDuration="57.228020907s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.856264354 +0000 UTC m=+1090.844504729" lastFinishedPulling="2026-03-14 09:16:00.370362803 +0000 UTC m=+1145.358603188" observedRunningTime="2026-03-14 09:16:01.223592429 +0000 UTC m=+1146.211832814" watchObservedRunningTime="2026-03-14 09:16:01.228020907 +0000 UTC m=+1146.216261282" Mar 14 09:16:01 crc kubenswrapper[4687]: I0314 09:16:01.241501 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" podStartSLOduration=2.6819985109999998 podStartE2EDuration="57.241480698s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.843100108 +0000 UTC m=+1090.831340483" lastFinishedPulling="2026-03-14 09:16:00.402582295 +0000 UTC m=+1145.390822670" observedRunningTime="2026-03-14 09:16:01.238848573 +0000 UTC m=+1146.227088948" watchObservedRunningTime="2026-03-14 09:16:01.241480698 +0000 UTC m=+1146.229721073" Mar 14 09:16:12 crc kubenswrapper[4687]: I0314 09:16:12.292107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" 
event={"ID":"50752c5a-72da-4aa5-838e-1b8f7d3ccb04","Type":"ContainerStarted","Data":"5ff4e91ca4bde7d91656f85f3ffcc3205d3ed3e19d3170a6d6a871a940d03c60"} Mar 14 09:16:12 crc kubenswrapper[4687]: I0314 09:16:12.294169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-x72zh" event={"ID":"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63","Type":"ContainerStarted","Data":"9ae11f6ee760c95a361da8dcd523641a543895d7a1ae68a851436f83e8a0a0e8"} Mar 14 09:16:12 crc kubenswrapper[4687]: I0314 09:16:12.309869 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j7pgq" podStartSLOduration=2.197917995 podStartE2EDuration="1m8.309851357s" podCreationTimestamp="2026-03-14 09:15:04 +0000 UTC" firstStartedPulling="2026-03-14 09:15:05.876562486 +0000 UTC m=+1090.864802861" lastFinishedPulling="2026-03-14 09:16:11.988495848 +0000 UTC m=+1156.976736223" observedRunningTime="2026-03-14 09:16:12.308196467 +0000 UTC m=+1157.296436842" watchObservedRunningTime="2026-03-14 09:16:12.309851357 +0000 UTC m=+1157.298091732" Mar 14 09:16:12 crc kubenswrapper[4687]: I0314 09:16:12.324102 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557996-x72zh" podStartSLOduration=1.269714893 podStartE2EDuration="12.324086568s" podCreationTimestamp="2026-03-14 09:16:00 +0000 UTC" firstStartedPulling="2026-03-14 09:16:00.930318399 +0000 UTC m=+1145.918558774" lastFinishedPulling="2026-03-14 09:16:11.984690074 +0000 UTC m=+1156.972930449" observedRunningTime="2026-03-14 09:16:12.321640038 +0000 UTC m=+1157.309880423" watchObservedRunningTime="2026-03-14 09:16:12.324086568 +0000 UTC m=+1157.312326943" Mar 14 09:16:13 crc kubenswrapper[4687]: I0314 09:16:13.301777 4687 generic.go:334] "Generic (PLEG): container finished" podID="435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" 
containerID="9ae11f6ee760c95a361da8dcd523641a543895d7a1ae68a851436f83e8a0a0e8" exitCode=0 Mar 14 09:16:13 crc kubenswrapper[4687]: I0314 09:16:13.301827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-x72zh" event={"ID":"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63","Type":"ContainerDied","Data":"9ae11f6ee760c95a361da8dcd523641a543895d7a1ae68a851436f83e8a0a0e8"} Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.652029 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.686446 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bwzgn" Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.759520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7b8d757b5d-r8fgz" Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.856732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqvt\" (UniqueName: \"kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt\") pod \"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63\" (UID: \"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63\") " Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.865558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt" (OuterVolumeSpecName: "kube-api-access-4dqvt") pod "435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" (UID: "435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63"). InnerVolumeSpecName "kube-api-access-4dqvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:16:14 crc kubenswrapper[4687]: I0314 09:16:14.957877 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqvt\" (UniqueName: \"kubernetes.io/projected/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63-kube-api-access-4dqvt\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.324049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-x72zh" event={"ID":"435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63","Type":"ContainerDied","Data":"57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b"} Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.324285 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57742517b48e6e9154bfc53c0f55dc7d807b301738a5ca6739cc76bf6a3e501b" Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.324246 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-x72zh" Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.717680 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-wdnmr"] Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.724734 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-wdnmr"] Mar 14 09:16:15 crc kubenswrapper[4687]: I0314 09:16:15.763775 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fda40ca-f725-406d-b1d7-60cbb5b3f386" path="/var/lib/kubelet/pods/0fda40ca-f725-406d-b1d7-60cbb5b3f386/volumes" Mar 14 09:16:24 crc kubenswrapper[4687]: I0314 09:16:24.110866 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 09:16:24 crc kubenswrapper[4687]: I0314 09:16:24.111475 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.123381 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:32 crc kubenswrapper[4687]: E0314 09:16:32.124764 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" containerName="oc" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.124784 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" containerName="oc" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.124975 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" containerName="oc" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.125955 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.132997 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.133089 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.133545 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jp9kj" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.136658 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.140047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.201284 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.202474 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.204891 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.223252 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.300484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9hh\" (UniqueName: \"kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.300800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.402314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9hh\" (UniqueName: \"kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.402416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc 
kubenswrapper[4687]: I0314 09:16:32.402463 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.402493 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w889n\" (UniqueName: \"kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.402545 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.403370 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.421044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9hh\" (UniqueName: \"kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh\") pod \"dnsmasq-dns-9df779c77-h7q88\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 
09:16:32.443271 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.503262 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.503395 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.503430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w889n\" (UniqueName: \"kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.504278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.504406 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " 
pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.527130 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w889n\" (UniqueName: \"kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n\") pod \"dnsmasq-dns-f89c97c9c-7lvdz\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.814730 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.891957 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:32 crc kubenswrapper[4687]: W0314 09:16:32.894373 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca9e4ff5_2259_42af_9689_98086e76e634.slice/crio-bf2a4f6482709064fe329207ef6cb55912ee5dd49972d12e304962b61b9b14e9 WatchSource:0}: Error finding container bf2a4f6482709064fe329207ef6cb55912ee5dd49972d12e304962b61b9b14e9: Status 404 returned error can't find the container with id bf2a4f6482709064fe329207ef6cb55912ee5dd49972d12e304962b61b9b14e9 Mar 14 09:16:32 crc kubenswrapper[4687]: I0314 09:16:32.898136 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:16:33 crc kubenswrapper[4687]: I0314 09:16:33.271129 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:33 crc kubenswrapper[4687]: W0314 09:16:33.281198 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b5fd3e_bdfb_43c2_b4ad_9ca75c893bb7.slice/crio-bcd4c8639bdeabd16a5b79961c88298de31c2c1accc2ecd936015be2876e895e WatchSource:0}: Error finding 
container bcd4c8639bdeabd16a5b79961c88298de31c2c1accc2ecd936015be2876e895e: Status 404 returned error can't find the container with id bcd4c8639bdeabd16a5b79961c88298de31c2c1accc2ecd936015be2876e895e Mar 14 09:16:33 crc kubenswrapper[4687]: I0314 09:16:33.474255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" event={"ID":"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7","Type":"ContainerStarted","Data":"bcd4c8639bdeabd16a5b79961c88298de31c2c1accc2ecd936015be2876e895e"} Mar 14 09:16:33 crc kubenswrapper[4687]: I0314 09:16:33.475855 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9df779c77-h7q88" event={"ID":"ca9e4ff5-2259-42af-9689-98086e76e634","Type":"ContainerStarted","Data":"bf2a4f6482709064fe329207ef6cb55912ee5dd49972d12e304962b61b9b14e9"} Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.815119 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.827351 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.829727 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.834492 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.950617 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dbn\" (UniqueName: \"kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.950670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:35 crc kubenswrapper[4687]: I0314 09:16:35.950706 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.053027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7dbn\" (UniqueName: \"kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.053084 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.053128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.054379 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.054949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.097388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7dbn\" (UniqueName: \"kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn\") pod \"dnsmasq-dns-85b5d9c497-79tqq\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.156825 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.176294 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.193505 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.194657 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.205639 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.358163 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.358525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.358550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mx74\" (UniqueName: \"kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 
09:16:36.452675 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.461411 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.461458 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mx74\" (UniqueName: \"kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.461537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.462323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.462623 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" 
Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.472855 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.474000 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.498059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mx74\" (UniqueName: \"kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74\") pod \"dnsmasq-dns-7c96c45f57-wkhq6\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.498605 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.516526 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.563218 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.563289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.563361 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqk9r\" (UniqueName: \"kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.664160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqk9r\" (UniqueName: \"kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.664237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: 
\"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.664283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.665082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.667519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.680425 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqk9r\" (UniqueName: \"kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r\") pod \"dnsmasq-dns-64944fc74f-h72lx\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:36 crc kubenswrapper[4687]: I0314 09:16:36.841648 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.038430 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.039678 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.045688 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.045883 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.047200 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.047737 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ls7br" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.048033 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.048239 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.050664 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.057695 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171223 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6785aec9-5237-4c55-9ec3-1d8783495b3a-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zsdb\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-kube-api-access-9zsdb\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6785aec9-5237-4c55-9ec3-1d8783495b3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171444 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171468 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.171484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc 
kubenswrapper[4687]: I0314 09:16:37.273109 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273167 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273201 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273312 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6785aec9-5237-4c55-9ec3-1d8783495b3a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273350 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zsdb\" (UniqueName: 
\"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-kube-api-access-9zsdb\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273386 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273413 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6785aec9-5237-4c55-9ec3-1d8783495b3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273499 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " 
pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.273853 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.274670 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.277890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.278216 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.279701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.280858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6785aec9-5237-4c55-9ec3-1d8783495b3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.282022 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6785aec9-5237-4c55-9ec3-1d8783495b3a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.282280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.282896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.284825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6785aec9-5237-4c55-9ec3-1d8783495b3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.294121 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zsdb\" (UniqueName: \"kubernetes.io/projected/6785aec9-5237-4c55-9ec3-1d8783495b3a-kube-api-access-9zsdb\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " 
pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.311668 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"6785aec9-5237-4c55-9ec3-1d8783495b3a\") " pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.316605 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.318054 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.323171 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.324319 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.327663 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-snz66" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.331525 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.331761 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.331969 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.332121 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.343293 4687 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.373098 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.475793 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.475915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc54e55-6120-453d-8955-b7f478318618-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.475978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476417 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476460 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxcz\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-kube-api-access-dsxcz\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.476703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc54e55-6120-453d-8955-b7f478318618-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.578657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.579130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.579158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.579618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.579796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxcz\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-kube-api-access-dsxcz\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.580115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.580146 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.581115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.581407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc54e55-6120-453d-8955-b7f478318618-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.581429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.581262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.580390 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.580237 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.582018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc54e55-6120-453d-8955-b7f478318618-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 
crc kubenswrapper[4687]: I0314 09:16:37.582054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.582074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc54e55-6120-453d-8955-b7f478318618-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.583606 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.584669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc54e55-6120-453d-8955-b7f478318618-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.584843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.586258 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.588739 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc54e55-6120-453d-8955-b7f478318618-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.596287 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxcz\" (UniqueName: \"kubernetes.io/projected/4bc54e55-6120-453d-8955-b7f478318618-kube-api-access-dsxcz\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.614255 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4bc54e55-6120-453d-8955-b7f478318618\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.621881 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.623380 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626292 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626519 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626606 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626439 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626695 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-8k8rd" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.626963 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.638720 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.668179 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-plugins-conf\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5w2l\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-kube-api-access-v5w2l\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83405408-9b65-42fd-955c-952cad220093-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83405408-9b65-42fd-955c-952cad220093-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785797 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.785820 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.886914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5w2l\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-kube-api-access-v5w2l\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.886977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83405408-9b65-42fd-955c-952cad220093-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887059 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83405408-9b65-42fd-955c-952cad220093-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.887295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.888257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.889170 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.889364 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.892761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83405408-9b65-42fd-955c-952cad220093-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.893656 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.893978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83405408-9b65-42fd-955c-952cad220093-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.896917 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.897837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/83405408-9b65-42fd-955c-952cad220093-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.899893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.917050 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83405408-9b65-42fd-955c-952cad220093-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.922291 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:37 crc kubenswrapper[4687]: I0314 09:16:37.925097 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5w2l\" (UniqueName: \"kubernetes.io/projected/83405408-9b65-42fd-955c-952cad220093-kube-api-access-v5w2l\") pod \"notifications-rabbitmq-server-0\" (UID: \"83405408-9b65-42fd-955c-952cad220093\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:38 crc kubenswrapper[4687]: I0314 09:16:38.006671 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.148968 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.150518 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.158089 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.158499 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.158634 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-b9qv6" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.160718 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.165844 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.166733 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310181 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310295 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqwbq\" (UniqueName: \"kubernetes.io/projected/cd37e7dc-9797-42c5-865f-832412233c32-kube-api-access-kqwbq\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-config-data-default\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-kolla-config\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.310485 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd37e7dc-9797-42c5-865f-832412233c32-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.411662 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd37e7dc-9797-42c5-865f-832412233c32-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412165 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: 
\"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqwbq\" (UniqueName: \"kubernetes.io/projected/cd37e7dc-9797-42c5-865f-832412233c32-kube-api-access-kqwbq\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-config-data-default\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412288 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-kolla-config\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.412320 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.415402 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc 
kubenswrapper[4687]: I0314 09:16:39.417272 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-kolla-config\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.417428 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-config-data-default\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.417557 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd37e7dc-9797-42c5-865f-832412233c32-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.417844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd37e7dc-9797-42c5-865f-832412233c32-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.419484 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.420893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd37e7dc-9797-42c5-865f-832412233c32-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.451322 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.455551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqwbq\" (UniqueName: \"kubernetes.io/projected/cd37e7dc-9797-42c5-865f-832412233c32-kube-api-access-kqwbq\") pod \"openstack-galera-0\" (UID: \"cd37e7dc-9797-42c5-865f-832412233c32\") " pod="openstack/openstack-galera-0" Mar 14 09:16:39 crc kubenswrapper[4687]: I0314 09:16:39.516820 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.422739 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.424071 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.425816 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kg9j6" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.425974 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.427189 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.430405 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.434539 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529163 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529216 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce12a318-4d43-442e-9621-690da5f189eb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529366 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529386 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.529402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t7x\" (UniqueName: 
\"kubernetes.io/projected/ce12a318-4d43-442e-9621-690da5f189eb-kube-api-access-g2t7x\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t7x\" (UniqueName: \"kubernetes.io/projected/ce12a318-4d43-442e-9621-690da5f189eb-kube-api-access-g2t7x\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630363 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630394 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630456 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce12a318-4d43-442e-9621-690da5f189eb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.630631 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.631017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce12a318-4d43-442e-9621-690da5f189eb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.631174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.631766 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.632012 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce12a318-4d43-442e-9621-690da5f189eb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.644934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.645703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce12a318-4d43-442e-9621-690da5f189eb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc 
kubenswrapper[4687]: I0314 09:16:40.665209 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t7x\" (UniqueName: \"kubernetes.io/projected/ce12a318-4d43-442e-9621-690da5f189eb-kube-api-access-g2t7x\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.674450 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce12a318-4d43-442e-9621-690da5f189eb\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.718027 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.719395 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.722204 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.722204 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6dxnp" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.724397 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.734940 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.795697 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.832205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-config-data\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.832294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.832312 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.832377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-kolla-config\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.832433 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpb5n\" (UniqueName: \"kubernetes.io/projected/59923589-5501-43dc-af74-eb7006f6c427-kube-api-access-wpb5n\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 
09:16:40.934152 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-config-data\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.934237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.934258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.934290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-kolla-config\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.934331 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpb5n\" (UniqueName: \"kubernetes.io/projected/59923589-5501-43dc-af74-eb7006f6c427-kube-api-access-wpb5n\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.935103 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-kolla-config\") pod 
\"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.935272 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59923589-5501-43dc-af74-eb7006f6c427-config-data\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.937509 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.943136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59923589-5501-43dc-af74-eb7006f6c427-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:40 crc kubenswrapper[4687]: I0314 09:16:40.949921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpb5n\" (UniqueName: \"kubernetes.io/projected/59923589-5501-43dc-af74-eb7006f6c427-kube-api-access-wpb5n\") pod \"memcached-0\" (UID: \"59923589-5501-43dc-af74-eb7006f6c427\") " pod="openstack/memcached-0" Mar 14 09:16:41 crc kubenswrapper[4687]: I0314 09:16:41.041802 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.021629 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.023119 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.030486 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mx9bh" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.065300 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.066114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qt8p\" (UniqueName: \"kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p\") pod \"kube-state-metrics-0\" (UID: \"83b6dcda-2598-425a-9ec3-3ca523f94052\") " pod="openstack/kube-state-metrics-0" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.170067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qt8p\" (UniqueName: \"kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p\") pod \"kube-state-metrics-0\" (UID: \"83b6dcda-2598-425a-9ec3-3ca523f94052\") " pod="openstack/kube-state-metrics-0" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.217189 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qt8p\" (UniqueName: \"kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p\") pod \"kube-state-metrics-0\" (UID: \"83b6dcda-2598-425a-9ec3-3ca523f94052\") " pod="openstack/kube-state-metrics-0" Mar 14 09:16:43 crc kubenswrapper[4687]: I0314 09:16:43.358984 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.183422 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.186003 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.188404 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.192830 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.192908 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.192957 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.193500 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-s2k8k" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.193651 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.194113 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.197903 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.199697 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286346 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286640 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286975 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.286996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc 
kubenswrapper[4687]: I0314 09:16:44.287023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgl5n\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.387923 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.387968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388029 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388056 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgl5n\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388180 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388220 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388266 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.388364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.389058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.389059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.390030 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.393771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.394006 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.394529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.394734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.397088 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.397210 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.397248 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e883ad2e85d4265753417decd9704d55cec1792fde830c419ee7ac911f8f2bd8/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.417097 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgl5n\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.427701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:44 crc kubenswrapper[4687]: I0314 09:16:44.506326 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.728489 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s5czb"] Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.733051 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.735014 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.737478 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.737669 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dbxxf" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.787558 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sfclc"] Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.792664 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5czb"] Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.792764 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.809862 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sfclc"] Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829149 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829225 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtrk\" (UniqueName: \"kubernetes.io/projected/5f410ca3-8151-42b5-9250-837b9444eb7e-kube-api-access-vjtrk\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f410ca3-8151-42b5-9250-837b9444eb7e-scripts\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-combined-ca-bundle\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829389 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-ovn-controller-tls-certs\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.829411 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-log-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.930772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-lib\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-run\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-etc-ovs\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931118 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w68d8\" (UniqueName: \"kubernetes.io/projected/af0f78a2-052f-428c-8b71-425a477a00bd-kube-api-access-w68d8\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931150 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0f78a2-052f-428c-8b71-425a477a00bd-scripts\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-combined-ca-bundle\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931204 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-log\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-ovn-controller-tls-certs\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-log-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtrk\" (UniqueName: \"kubernetes.io/projected/5f410ca3-8151-42b5-9250-837b9444eb7e-kube-api-access-vjtrk\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931346 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.931367 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f410ca3-8151-42b5-9250-837b9444eb7e-scripts\") pod \"ovn-controller-s5czb\" (UID: 
\"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.933255 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f410ca3-8151-42b5-9250-837b9444eb7e-scripts\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.933765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-log-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.933867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.934013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f410ca3-8151-42b5-9250-837b9444eb7e-var-run-ovn\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.958974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-combined-ca-bundle\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.963512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f410ca3-8151-42b5-9250-837b9444eb7e-ovn-controller-tls-certs\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:45 crc kubenswrapper[4687]: I0314 09:16:45.963694 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtrk\" (UniqueName: \"kubernetes.io/projected/5f410ca3-8151-42b5-9250-837b9444eb7e-kube-api-access-vjtrk\") pod \"ovn-controller-s5czb\" (UID: \"5f410ca3-8151-42b5-9250-837b9444eb7e\") " pod="openstack/ovn-controller-s5czb" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-lib\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-run\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-etc-ovs\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w68d8\" (UniqueName: 
\"kubernetes.io/projected/af0f78a2-052f-428c-8b71-425a477a00bd-kube-api-access-w68d8\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0f78a2-052f-428c-8b71-425a477a00bd-scripts\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.032990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-log\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.033286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-lib\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.033355 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-run\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.033462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-etc-ovs\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " 
pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.035300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0f78a2-052f-428c-8b71-425a477a00bd-scripts\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.035407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0f78a2-052f-428c-8b71-425a477a00bd-var-log\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.062874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w68d8\" (UniqueName: \"kubernetes.io/projected/af0f78a2-052f-428c-8b71-425a477a00bd-kube-api-access-w68d8\") pod \"ovn-controller-ovs-sfclc\" (UID: \"af0f78a2-052f-428c-8b71-425a477a00bd\") " pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.084719 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5czb" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.117913 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.632382 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.635707 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.639479 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.639662 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.639843 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mfbv9" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.639981 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.640136 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.642262 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.744968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745028 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-config\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745063 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjpz\" (UniqueName: \"kubernetes.io/projected/79f508ad-6f43-451b-b4f2-2250754b5b1c-kube-api-access-rjjpz\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.745796 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847129 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-config\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" 
Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjpz\" (UniqueName: \"kubernetes.io/projected/79f508ad-6f43-451b-b4f2-2250754b5b1c-kube-api-access-rjjpz\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847383 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847686 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.847964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.849074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.850633 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f508ad-6f43-451b-b4f2-2250754b5b1c-config\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.850950 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.852172 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.863815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjpz\" (UniqueName: \"kubernetes.io/projected/79f508ad-6f43-451b-b4f2-2250754b5b1c-kube-api-access-rjjpz\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.870106 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f508ad-6f43-451b-b4f2-2250754b5b1c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " 
pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.884729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79f508ad-6f43-451b-b4f2-2250754b5b1c\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:46 crc kubenswrapper[4687]: I0314 09:16:46.955673 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.268289 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.270083 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.272098 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.272435 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.272791 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.275145 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xzfn7" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.282484 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckmd\" (UniqueName: 
\"kubernetes.io/projected/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-kube-api-access-kckmd\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405537 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405782 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.405955 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.406003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.406052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508437 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 
09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kckmd\" (UniqueName: \"kubernetes.io/projected/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-kube-api-access-kckmd\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508578 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508642 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.508753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.509198 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.509868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.510937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.511160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.516643 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.517897 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc 
kubenswrapper[4687]: I0314 09:16:50.519135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.538753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckmd\" (UniqueName: \"kubernetes.io/projected/95b6741a-6d2f-45c7-81a1-4e254e9e23f8-kube-api-access-kckmd\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.545395 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"95b6741a-6d2f-45c7-81a1-4e254e9e23f8\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:50 crc kubenswrapper[4687]: I0314 09:16:50.588920 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.286589 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.286865 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.287049 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp9hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-9df779c77-h7q88_openstack(ca9e4ff5-2259-42af-9689-98086e76e634): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.288357 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-9df779c77-h7q88" podUID="ca9e4ff5-2259-42af-9689-98086e76e634" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.328451 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.328507 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.328618 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.243:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w889n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f89c97c9c-7lvdz_openstack(29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:16:52 crc kubenswrapper[4687]: E0314 09:16:52.329821 4687 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" podUID="29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7" Mar 14 09:16:52 crc kubenswrapper[4687]: I0314 09:16:52.776207 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.332762 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:16:53 crc kubenswrapper[4687]: W0314 09:16:53.343140 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce12a318_4d43_442e_9621_690da5f189eb.slice/crio-cb2b15169d338f2a1d454331958c7a3a3163c1fcc872f92dcb8fd626c1c7c9de WatchSource:0}: Error finding container cb2b15169d338f2a1d454331958c7a3a3163c1fcc872f92dcb8fd626c1c7c9de: Status 404 returned error can't find the container with id cb2b15169d338f2a1d454331958c7a3a3163c1fcc872f92dcb8fd626c1c7c9de Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.387668 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.410907 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.465087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w889n\" (UniqueName: \"kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n\") pod \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.465169 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp9hh\" (UniqueName: \"kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh\") pod \"ca9e4ff5-2259-42af-9689-98086e76e634\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.465270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc\") pod \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.465323 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config\") pod \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\" (UID: \"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7\") " Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.465370 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config\") pod \"ca9e4ff5-2259-42af-9689-98086e76e634\" (UID: \"ca9e4ff5-2259-42af-9689-98086e76e634\") " Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.466179 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7" (UID: "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.466279 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config" (OuterVolumeSpecName: "config") pod "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7" (UID: "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.466547 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config" (OuterVolumeSpecName: "config") pod "ca9e4ff5-2259-42af-9689-98086e76e634" (UID: "ca9e4ff5-2259-42af-9689-98086e76e634"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.471017 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh" (OuterVolumeSpecName: "kube-api-access-lp9hh") pod "ca9e4ff5-2259-42af-9689-98086e76e634" (UID: "ca9e4ff5-2259-42af-9689-98086e76e634"). InnerVolumeSpecName "kube-api-access-lp9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.471259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n" (OuterVolumeSpecName: "kube-api-access-w889n") pod "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7" (UID: "29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7"). InnerVolumeSpecName "kube-api-access-w889n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.567357 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w889n\" (UniqueName: \"kubernetes.io/projected/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-kube-api-access-w889n\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.567395 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp9hh\" (UniqueName: \"kubernetes.io/projected/ca9e4ff5-2259-42af-9689-98086e76e634-kube-api-access-lp9hh\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.567404 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.567413 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.567421 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9e4ff5-2259-42af-9689-98086e76e634-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.635943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9df779c77-h7q88" event={"ID":"ca9e4ff5-2259-42af-9689-98086e76e634","Type":"ContainerDied","Data":"bf2a4f6482709064fe329207ef6cb55912ee5dd49972d12e304962b61b9b14e9"} Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.636027 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9df779c77-h7q88" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.650873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6785aec9-5237-4c55-9ec3-1d8783495b3a","Type":"ContainerStarted","Data":"090d26620ef7c7a1bd57671498474e0a83b4d7237ddba5ee64f0deb5ba5cbaac"} Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.656097 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" event={"ID":"29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7","Type":"ContainerDied","Data":"bcd4c8639bdeabd16a5b79961c88298de31c2c1accc2ecd936015be2876e895e"} Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.656177 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f89c97c9c-7lvdz" Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.667716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce12a318-4d43-442e-9621-690da5f189eb","Type":"ContainerStarted","Data":"cb2b15169d338f2a1d454331958c7a3a3163c1fcc872f92dcb8fd626c1c7c9de"} Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.749126 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.761702 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9df779c77-h7q88"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.773469 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.787112 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f89c97c9c-7lvdz"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.895574 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 09:16:53 
crc kubenswrapper[4687]: I0314 09:16:53.915101 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.930014 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.957014 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.973025 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.980776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:16:53 crc kubenswrapper[4687]: I0314 09:16:53.993797 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.004989 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.020787 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.030076 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5czb"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.036311 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.088306 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sfclc"] Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.111182 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.111233 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:16:54 crc kubenswrapper[4687]: I0314 09:16:54.645027 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.366257 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59923589_5501_43dc_af74_eb7006f6c427.slice/crio-4b14c734acc65274c14d1bf1bae6ec54408747f5581e668979ca7e2c3e40ff3d WatchSource:0}: Error finding container 4b14c734acc65274c14d1bf1bae6ec54408747f5581e668979ca7e2c3e40ff3d: Status 404 returned error can't find the container with id 4b14c734acc65274c14d1bf1bae6ec54408747f5581e668979ca7e2c3e40ff3d Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.371517 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92acaca1_342f_4033_9247_07768c100649.slice/crio-dccac1b6b8492488a5a46e2e149e31e05ae30a05f82f45d3738c97f64981a73a WatchSource:0}: Error finding container dccac1b6b8492488a5a46e2e149e31e05ae30a05f82f45d3738c97f64981a73a: Status 404 returned error can't find the container with id dccac1b6b8492488a5a46e2e149e31e05ae30a05f82f45d3738c97f64981a73a Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.374249 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83b6dcda_2598_425a_9ec3_3ca523f94052.slice/crio-4fb0a6aacc46742454d2fad563d89aebd065747a4ec44ac026725e7025e3011e WatchSource:0}: Error finding container 4fb0a6aacc46742454d2fad563d89aebd065747a4ec44ac026725e7025e3011e: Status 404 returned error can't find the container with id 4fb0a6aacc46742454d2fad563d89aebd065747a4ec44ac026725e7025e3011e Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.376177 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a029a97_a503_4a10_a68c_8b145b6fc798.slice/crio-a7f1bc7aaed273b9c3a6f71499830396b5e1ca9751f790fe364133510d71448a WatchSource:0}: Error finding container a7f1bc7aaed273b9c3a6f71499830396b5e1ca9751f790fe364133510d71448a: Status 404 returned error can't find the container with id a7f1bc7aaed273b9c3a6f71499830396b5e1ca9751f790fe364133510d71448a Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.388765 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83405408_9b65_42fd_955c_952cad220093.slice/crio-49271646310a8621f8e86babc500b20fefe263a64bc95c023b7769b2d2c0717a WatchSource:0}: Error finding container 49271646310a8621f8e86babc500b20fefe263a64bc95c023b7769b2d2c0717a: Status 404 returned error can't find the container with id 49271646310a8621f8e86babc500b20fefe263a64bc95c023b7769b2d2c0717a Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.390475 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b6741a_6d2f_45c7_81a1_4e254e9e23f8.slice/crio-863c1a34a9e50e3bd0a221f004a600b56ec311d1e35d1a99245fabc09801c34e WatchSource:0}: Error finding container 863c1a34a9e50e3bd0a221f004a600b56ec311d1e35d1a99245fabc09801c34e: Status 404 returned error can't find the container with id 
863c1a34a9e50e3bd0a221f004a600b56ec311d1e35d1a99245fabc09801c34e Mar 14 09:16:55 crc kubenswrapper[4687]: W0314 09:16:55.392805 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc54e55_6120_453d_8955_b7f478318618.slice/crio-551f570487d9035a87fe1d0ba1ff91b4f1dd5c7b138e224c2f9a4052bc822abc WatchSource:0}: Error finding container 551f570487d9035a87fe1d0ba1ff91b4f1dd5c7b138e224c2f9a4052bc822abc: Status 404 returned error can't find the container with id 551f570487d9035a87fe1d0ba1ff91b4f1dd5c7b138e224c2f9a4052bc822abc Mar 14 09:16:55 crc kubenswrapper[4687]: E0314 09:16:55.396038 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.243:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9chd4h668h4h5chcch645h58dh56dh678h599h564h64bhbfh5c9h675h8fh65bh84h64dh574h5ffh574h667h5d9h658h58fh5b5h684h57fh5dh646q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volum
eMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kckmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(95b6741a-6d2f-45c7-81a1-4e254e9e23f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:16:55 crc kubenswrapper[4687]: E0314 09:16:55.397867 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n9chd4h668h4h5chcch645h58dh56dh678h599h564h64bhbfh5c9h675h8fh65bh84h64dh574h5ffh574h667h5d9h658h58fh5b5h684h57fh5dh646q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kckmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfi
le:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(95b6741a-6d2f-45c7-81a1-4e254e9e23f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:16:55 crc kubenswrapper[4687]: E0314 09:16:55.399044 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-sb-0" podUID="95b6741a-6d2f-45c7-81a1-4e254e9e23f8" Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.683589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerStarted","Data":"dccac1b6b8492488a5a46e2e149e31e05ae30a05f82f45d3738c97f64981a73a"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.685488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"79f508ad-6f43-451b-b4f2-2250754b5b1c","Type":"ContainerStarted","Data":"95849986cce86d0d4109883f4b66138bbbbed61573281ff1e1a468e51d948b3f"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.686664 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" event={"ID":"3a029a97-a503-4a10-a68c-8b145b6fc798","Type":"ContainerStarted","Data":"a7f1bc7aaed273b9c3a6f71499830396b5e1ca9751f790fe364133510d71448a"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.688805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" 
event={"ID":"83405408-9b65-42fd-955c-952cad220093","Type":"ContainerStarted","Data":"49271646310a8621f8e86babc500b20fefe263a64bc95c023b7769b2d2c0717a"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.690804 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83b6dcda-2598-425a-9ec3-3ca523f94052","Type":"ContainerStarted","Data":"4fb0a6aacc46742454d2fad563d89aebd065747a4ec44ac026725e7025e3011e"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.691970 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cd37e7dc-9797-42c5-865f-832412233c32","Type":"ContainerStarted","Data":"e02c0aeb184985dba91da42eb420c2420c422cc49cf37e14cbf917e5fa702800"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.692827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4bc54e55-6120-453d-8955-b7f478318618","Type":"ContainerStarted","Data":"551f570487d9035a87fe1d0ba1ff91b4f1dd5c7b138e224c2f9a4052bc822abc"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.693908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59923589-5501-43dc-af74-eb7006f6c427","Type":"ContainerStarted","Data":"4b14c734acc65274c14d1bf1bae6ec54408747f5581e668979ca7e2c3e40ff3d"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.694939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5czb" event={"ID":"5f410ca3-8151-42b5-9250-837b9444eb7e","Type":"ContainerStarted","Data":"590e1e6bf2b2701c3c88b86209ed0bd7f94235235c87d036bc0581791c4b985e"} Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.695965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"95b6741a-6d2f-45c7-81a1-4e254e9e23f8","Type":"ContainerStarted","Data":"863c1a34a9e50e3bd0a221f004a600b56ec311d1e35d1a99245fabc09801c34e"} Mar 14 09:16:55 
crc kubenswrapper[4687]: I0314 09:16:55.697093 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" event={"ID":"d7227f96-5855-4699-bc6b-b36309541fb9","Type":"ContainerStarted","Data":"9f781556e966fc3228f3381a20a376e8535b0ddd99c1062444a093d5e023aa73"} Mar 14 09:16:55 crc kubenswrapper[4687]: E0314 09:16:55.698052 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="95b6741a-6d2f-45c7-81a1-4e254e9e23f8" Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.762864 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7" path="/var/lib/kubelet/pods/29b5fd3e-bdfb-43c2-b4ad-9ca75c893bb7/volumes" Mar 14 09:16:55 crc kubenswrapper[4687]: I0314 09:16:55.763296 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9e4ff5-2259-42af-9689-98086e76e634" path="/var/lib/kubelet/pods/ca9e4ff5-2259-42af-9689-98086e76e634/volumes" Mar 14 09:16:56 crc kubenswrapper[4687]: E0314 09:16:56.720910 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" 
podUID="95b6741a-6d2f-45c7-81a1-4e254e9e23f8" Mar 14 09:16:57 crc kubenswrapper[4687]: W0314 09:16:57.779187 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8ccc418_2d3f_4d06_a65d_b3eb590967c4.slice/crio-b802051038d606426b1d37414824a509bff55de207394331713a231dafbf1eec WatchSource:0}: Error finding container b802051038d606426b1d37414824a509bff55de207394331713a231dafbf1eec: Status 404 returned error can't find the container with id b802051038d606426b1d37414824a509bff55de207394331713a231dafbf1eec Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.207657 4687 scope.go:117] "RemoveContainer" containerID="42fcb2f1c4ce84038d17c8741e13c4260d45690fc465f94cb6644e4c178f851b" Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.735546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cd37e7dc-9797-42c5-865f-832412233c32","Type":"ContainerStarted","Data":"cdcf48b2ddcdcb93952109e898910ddd6fa31e1bab0db841b5ff39ca3fe35958"} Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.738562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sfclc" event={"ID":"af0f78a2-052f-428c-8b71-425a477a00bd","Type":"ContainerStarted","Data":"864d3c16df9b116f7ad06527ad77916bd1f959a870e0d5bb0a10bd49a038f684"} Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.740958 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce12a318-4d43-442e-9621-690da5f189eb","Type":"ContainerStarted","Data":"b17fc9752eeb24561aa3a6af7a7ffe43b16a653e91c06c1042607f0e0fb0daf7"} Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.745830 4687 generic.go:334] "Generic (PLEG): container finished" podID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerID="59c5d2fe642ca902c606410161652522b0c336d94e73eae35484213377ac9fba" exitCode=0 Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 
09:16:58.745871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" event={"ID":"d8ccc418-2d3f-4d06-a65d-b3eb590967c4","Type":"ContainerDied","Data":"59c5d2fe642ca902c606410161652522b0c336d94e73eae35484213377ac9fba"} Mar 14 09:16:58 crc kubenswrapper[4687]: I0314 09:16:58.745892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" event={"ID":"d8ccc418-2d3f-4d06-a65d-b3eb590967c4","Type":"ContainerStarted","Data":"b802051038d606426b1d37414824a509bff55de207394331713a231dafbf1eec"} Mar 14 09:16:59 crc kubenswrapper[4687]: I0314 09:16:59.758739 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6785aec9-5237-4c55-9ec3-1d8783495b3a","Type":"ContainerStarted","Data":"885ae97c5814a24669102bfdff3107133f16780644f8c61b5722176415ea815a"} Mar 14 09:17:04 crc kubenswrapper[4687]: I0314 09:17:04.794978 4687 generic.go:334] "Generic (PLEG): container finished" podID="d7227f96-5855-4699-bc6b-b36309541fb9" containerID="2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d" exitCode=0 Mar 14 09:17:04 crc kubenswrapper[4687]: I0314 09:17:04.795056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" event={"ID":"d7227f96-5855-4699-bc6b-b36309541fb9","Type":"ContainerDied","Data":"2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d"} Mar 14 09:17:04 crc kubenswrapper[4687]: I0314 09:17:04.798040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"83405408-9b65-42fd-955c-952cad220093","Type":"ContainerStarted","Data":"a35d7a7367010c6ee5fceb6e6f64753a6d08882f88aa7775cc2a9bfc042f548f"} Mar 14 09:17:04 crc kubenswrapper[4687]: I0314 09:17:04.799740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"4bc54e55-6120-453d-8955-b7f478318618","Type":"ContainerStarted","Data":"15d97e47b4e0ab124300f2e7a4177f49aeb6270c7f2b72a1b6c54a1c56017025"} Mar 14 09:17:08 crc kubenswrapper[4687]: I0314 09:17:08.865665 4687 generic.go:334] "Generic (PLEG): container finished" podID="3a029a97-a503-4a10-a68c-8b145b6fc798" containerID="c7bccdb30910129b58650701e0cf8af8325605f0e35ac9148d64b9276e22cee9" exitCode=0 Mar 14 09:17:08 crc kubenswrapper[4687]: I0314 09:17:08.865731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" event={"ID":"3a029a97-a503-4a10-a68c-8b145b6fc798","Type":"ContainerDied","Data":"c7bccdb30910129b58650701e0cf8af8325605f0e35ac9148d64b9276e22cee9"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.215578 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.347612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config\") pod \"3a029a97-a503-4a10-a68c-8b145b6fc798\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.347662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc\") pod \"3a029a97-a503-4a10-a68c-8b145b6fc798\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.347707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7dbn\" (UniqueName: \"kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn\") pod \"3a029a97-a503-4a10-a68c-8b145b6fc798\" (UID: \"3a029a97-a503-4a10-a68c-8b145b6fc798\") " Mar 14 09:17:09 crc kubenswrapper[4687]: 
I0314 09:17:09.353605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn" (OuterVolumeSpecName: "kube-api-access-b7dbn") pod "3a029a97-a503-4a10-a68c-8b145b6fc798" (UID: "3a029a97-a503-4a10-a68c-8b145b6fc798"). InnerVolumeSpecName "kube-api-access-b7dbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.449486 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7dbn\" (UniqueName: \"kubernetes.io/projected/3a029a97-a503-4a10-a68c-8b145b6fc798-kube-api-access-b7dbn\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.474849 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a029a97-a503-4a10-a68c-8b145b6fc798" (UID: "3a029a97-a503-4a10-a68c-8b145b6fc798"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.477116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config" (OuterVolumeSpecName: "config") pod "3a029a97-a503-4a10-a68c-8b145b6fc798" (UID: "3a029a97-a503-4a10-a68c-8b145b6fc798"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.551432 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.551462 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a029a97-a503-4a10-a68c-8b145b6fc798-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.874091 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" event={"ID":"3a029a97-a503-4a10-a68c-8b145b6fc798","Type":"ContainerDied","Data":"a7f1bc7aaed273b9c3a6f71499830396b5e1ca9751f790fe364133510d71448a"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.874518 4687 scope.go:117] "RemoveContainer" containerID="c7bccdb30910129b58650701e0cf8af8325605f0e35ac9148d64b9276e22cee9" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.874100 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b5d9c497-79tqq" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.876566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sfclc" event={"ID":"af0f78a2-052f-428c-8b71-425a477a00bd","Type":"ContainerStarted","Data":"a1ebe98bee9829456cee7cb8dd80f30f9b8d456f08b650cf3e1fcc5e1d740d71"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.879584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83b6dcda-2598-425a-9ec3-3ca523f94052","Type":"ContainerStarted","Data":"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.879975 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.882193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" event={"ID":"d8ccc418-2d3f-4d06-a65d-b3eb590967c4","Type":"ContainerStarted","Data":"a41ac4a49134afb05d24a5959e88f1f9bb6af34aa475806619d48ff8cee2b4da"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.882304 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.883846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" event={"ID":"d7227f96-5855-4699-bc6b-b36309541fb9","Type":"ContainerStarted","Data":"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.883981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.884878 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"79f508ad-6f43-451b-b4f2-2250754b5b1c","Type":"ContainerStarted","Data":"65425383d58e596841a341b532e28f79042bdf1cdde858eca6270144360f4847"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.886206 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"95b6741a-6d2f-45c7-81a1-4e254e9e23f8","Type":"ContainerStarted","Data":"c0aaeda0d313470577ecf1fd1ebc0e47c3aceb6d385e94b4cfe708f9a48750e6"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.887728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59923589-5501-43dc-af74-eb7006f6c427","Type":"ContainerStarted","Data":"95835215a96d9223fada9cdf4ae3d5e1bf695e3312a0ee1c61abc686b7a0dc26"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.887873 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.889018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5czb" event={"ID":"5f410ca3-8151-42b5-9250-837b9444eb7e","Type":"ContainerStarted","Data":"97415f451cf34d1c2f04b94e191be30915bf63a9070784d7d93ff292faa3447d"} Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.889140 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s5czb" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.926087 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s5czb" podStartSLOduration=12.276444909 podStartE2EDuration="24.92606601s" podCreationTimestamp="2026-03-14 09:16:45 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.385833292 +0000 UTC m=+1200.374073667" lastFinishedPulling="2026-03-14 09:17:08.035454393 +0000 UTC m=+1213.023694768" observedRunningTime="2026-03-14 09:17:09.92403965 +0000 UTC m=+1214.912280025" watchObservedRunningTime="2026-03-14 09:17:09.92606601 +0000 UTC 
m=+1214.914306385" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.945974 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.747493969 podStartE2EDuration="29.945960289s" podCreationTimestamp="2026-03-14 09:16:40 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.368531787 +0000 UTC m=+1200.356772162" lastFinishedPulling="2026-03-14 09:17:07.566998117 +0000 UTC m=+1212.555238482" observedRunningTime="2026-03-14 09:17:09.945957169 +0000 UTC m=+1214.934197544" watchObservedRunningTime="2026-03-14 09:17:09.945960289 +0000 UTC m=+1214.934200664" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.963948 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.115198902 podStartE2EDuration="27.963930651s" podCreationTimestamp="2026-03-14 09:16:42 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.378105621 +0000 UTC m=+1200.366345996" lastFinishedPulling="2026-03-14 09:17:09.22683737 +0000 UTC m=+1214.215077745" observedRunningTime="2026-03-14 09:17:09.960857716 +0000 UTC m=+1214.949098091" watchObservedRunningTime="2026-03-14 09:17:09.963930651 +0000 UTC m=+1214.952171026" Mar 14 09:17:09 crc kubenswrapper[4687]: I0314 09:17:09.989705 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" podStartSLOduration=33.81825301 podStartE2EDuration="33.989688974s" podCreationTimestamp="2026-03-14 09:16:36 +0000 UTC" firstStartedPulling="2026-03-14 09:16:57.781128015 +0000 UTC m=+1202.769368400" lastFinishedPulling="2026-03-14 09:16:57.952563989 +0000 UTC m=+1202.940804364" observedRunningTime="2026-03-14 09:17:09.986719271 +0000 UTC m=+1214.974959656" watchObservedRunningTime="2026-03-14 09:17:09.989688974 +0000 UTC m=+1214.977929339" Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.007524 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" podStartSLOduration=31.391625336 podStartE2EDuration="34.007506372s" podCreationTimestamp="2026-03-14 09:16:36 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.358179802 +0000 UTC m=+1200.346420177" lastFinishedPulling="2026-03-14 09:16:57.974060838 +0000 UTC m=+1202.962301213" observedRunningTime="2026-03-14 09:17:10.007221065 +0000 UTC m=+1214.995461450" watchObservedRunningTime="2026-03-14 09:17:10.007506372 +0000 UTC m=+1214.995746747" Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.065003 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.077880 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b5d9c497-79tqq"] Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.901327 4687 generic.go:334] "Generic (PLEG): container finished" podID="af0f78a2-052f-428c-8b71-425a477a00bd" containerID="a1ebe98bee9829456cee7cb8dd80f30f9b8d456f08b650cf3e1fcc5e1d740d71" exitCode=0 Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.901394 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sfclc" event={"ID":"af0f78a2-052f-428c-8b71-425a477a00bd","Type":"ContainerDied","Data":"a1ebe98bee9829456cee7cb8dd80f30f9b8d456f08b650cf3e1fcc5e1d740d71"} Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.904912 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce12a318-4d43-442e-9621-690da5f189eb" containerID="b17fc9752eeb24561aa3a6af7a7ffe43b16a653e91c06c1042607f0e0fb0daf7" exitCode=0 Mar 14 09:17:10 crc kubenswrapper[4687]: I0314 09:17:10.905161 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce12a318-4d43-442e-9621-690da5f189eb","Type":"ContainerDied","Data":"b17fc9752eeb24561aa3a6af7a7ffe43b16a653e91c06c1042607f0e0fb0daf7"} Mar 14 09:17:11 crc 
kubenswrapper[4687]: I0314 09:17:11.756142 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a029a97-a503-4a10-a68c-8b145b6fc798" path="/var/lib/kubelet/pods/3a029a97-a503-4a10-a68c-8b145b6fc798/volumes" Mar 14 09:17:11 crc kubenswrapper[4687]: I0314 09:17:11.916405 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd37e7dc-9797-42c5-865f-832412233c32" containerID="cdcf48b2ddcdcb93952109e898910ddd6fa31e1bab0db841b5ff39ca3fe35958" exitCode=0 Mar 14 09:17:11 crc kubenswrapper[4687]: I0314 09:17:11.916491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cd37e7dc-9797-42c5-865f-832412233c32","Type":"ContainerDied","Data":"cdcf48b2ddcdcb93952109e898910ddd6fa31e1bab0db841b5ff39ca3fe35958"} Mar 14 09:17:11 crc kubenswrapper[4687]: I0314 09:17:11.920859 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerStarted","Data":"a381ed9c3389a03b3f706c329af917bcaf237ea80de7fb593ca12a0ee32a3963"} Mar 14 09:17:12 crc kubenswrapper[4687]: I0314 09:17:12.944257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sfclc" event={"ID":"af0f78a2-052f-428c-8b71-425a477a00bd","Type":"ContainerStarted","Data":"0675b93606bb68f80dcc229a99aad966e48b18d071f25c240ce0ce0e3bfc3e40"} Mar 14 09:17:12 crc kubenswrapper[4687]: I0314 09:17:12.946939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce12a318-4d43-442e-9621-690da5f189eb","Type":"ContainerStarted","Data":"0259ffb027e150fe655398ef21e8e5c82212a016e2ab12286feccfdd3d2c025d"} Mar 14 09:17:12 crc kubenswrapper[4687]: I0314 09:17:12.951138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"cd37e7dc-9797-42c5-865f-832412233c32","Type":"ContainerStarted","Data":"1c1cc37484a06773eb7707892afbd38874a77917eeb371cf5e27fb4c9b154ea0"} Mar 14 09:17:12 crc kubenswrapper[4687]: I0314 09:17:12.981958 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.381709106 podStartE2EDuration="33.981936001s" podCreationTimestamp="2026-03-14 09:16:39 +0000 UTC" firstStartedPulling="2026-03-14 09:16:53.352480867 +0000 UTC m=+1198.340721242" lastFinishedPulling="2026-03-14 09:16:57.952707762 +0000 UTC m=+1202.940948137" observedRunningTime="2026-03-14 09:17:12.970798748 +0000 UTC m=+1217.959039123" watchObservedRunningTime="2026-03-14 09:17:12.981936001 +0000 UTC m=+1217.970176376" Mar 14 09:17:12 crc kubenswrapper[4687]: I0314 09:17:12.995237 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=32.383396224 podStartE2EDuration="34.995222059s" podCreationTimestamp="2026-03-14 09:16:38 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.360324635 +0000 UTC m=+1200.348565010" lastFinishedPulling="2026-03-14 09:16:57.97215046 +0000 UTC m=+1202.960390845" observedRunningTime="2026-03-14 09:17:12.993627829 +0000 UTC m=+1217.981868254" watchObservedRunningTime="2026-03-14 09:17:12.995222059 +0000 UTC m=+1217.983462434" Mar 14 09:17:13 crc kubenswrapper[4687]: I0314 09:17:13.959161 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sfclc" event={"ID":"af0f78a2-052f-428c-8b71-425a477a00bd","Type":"ContainerStarted","Data":"2c974cde1712aff2737a53130dc5670a1686bd440eb7f4ae48ecd17d65c4a487"} Mar 14 09:17:13 crc kubenswrapper[4687]: I0314 09:17:13.959523 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:17:13 crc kubenswrapper[4687]: I0314 09:17:13.962409 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"95b6741a-6d2f-45c7-81a1-4e254e9e23f8","Type":"ContainerStarted","Data":"913904b18e744c07726f88b0559b76b4567565d48a0e351076dcfe3f21211302"} Mar 14 09:17:13 crc kubenswrapper[4687]: I0314 09:17:13.964610 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"79f508ad-6f43-451b-b4f2-2250754b5b1c","Type":"ContainerStarted","Data":"f7a424a1e2d2f9bc10025f4854986ea85364bec288ab677af514184129f7a869"} Mar 14 09:17:13 crc kubenswrapper[4687]: I0314 09:17:13.981789 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sfclc" podStartSLOduration=18.97995709 podStartE2EDuration="28.981772241s" podCreationTimestamp="2026-03-14 09:16:45 +0000 UTC" firstStartedPulling="2026-03-14 09:16:57.843669252 +0000 UTC m=+1202.831909627" lastFinishedPulling="2026-03-14 09:17:07.845484403 +0000 UTC m=+1212.833724778" observedRunningTime="2026-03-14 09:17:13.978380008 +0000 UTC m=+1218.966620383" watchObservedRunningTime="2026-03-14 09:17:13.981772241 +0000 UTC m=+1218.970012616" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.001997 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.521346346 podStartE2EDuration="25.001975518s" podCreationTimestamp="2026-03-14 09:16:49 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.395867408 +0000 UTC m=+1200.384107773" lastFinishedPulling="2026-03-14 09:17:12.87649657 +0000 UTC m=+1217.864736945" observedRunningTime="2026-03-14 09:17:14.000534462 +0000 UTC m=+1218.988774837" watchObservedRunningTime="2026-03-14 09:17:14.001975518 +0000 UTC m=+1218.990215893" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.021432 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.543801549 podStartE2EDuration="29.021410066s" podCreationTimestamp="2026-03-14 
09:16:45 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.386470947 +0000 UTC m=+1200.374711322" lastFinishedPulling="2026-03-14 09:17:12.864079454 +0000 UTC m=+1217.852319839" observedRunningTime="2026-03-14 09:17:14.019616711 +0000 UTC m=+1219.007857166" watchObservedRunningTime="2026-03-14 09:17:14.021410066 +0000 UTC m=+1219.009650441" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.589620 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.644217 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.973122 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:14 crc kubenswrapper[4687]: I0314 09:17:14.973543 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.021315 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.279267 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.279563 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="dnsmasq-dns" containerID="cri-o://01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a" gracePeriod=10 Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.280476 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.321628 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:15 crc kubenswrapper[4687]: E0314 09:17:15.322027 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a029a97-a503-4a10-a68c-8b145b6fc798" containerName="init" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.322048 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a029a97-a503-4a10-a68c-8b145b6fc798" containerName="init" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.322233 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a029a97-a503-4a10-a68c-8b145b6fc798" containerName="init" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.323076 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.325310 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.338743 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.409561 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nm5b2"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.410803 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.416042 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.432188 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nm5b2"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.456769 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.457001 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.457127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrz6\" (UniqueName: \"kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.457274 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " 
pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.558756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-combined-ca-bundle\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.559013 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.560130 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.561885 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plrz6\" (UniqueName: \"kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.562055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovn-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " 
pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.562151 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jpk\" (UniqueName: \"kubernetes.io/projected/3a8d425d-cf87-4065-aaf4-8633e9387048-kube-api-access-d6jpk\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.562297 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovs-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.565625 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a8d425d-cf87-4065-aaf4-8633e9387048-config\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.565863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.583268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " 
pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.583159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.583326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.584123 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.592924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrz6\" (UniqueName: \"kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6\") pod \"dnsmasq-dns-65cc89f4b9-n6wwq\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.643035 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.645771 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.645992 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="dnsmasq-dns" containerID="cri-o://a41ac4a49134afb05d24a5959e88f1f9bb6af34aa475806619d48ff8cee2b4da" gracePeriod=10 Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.648643 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.676020 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.677602 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.683285 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-combined-ca-bundle\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovn-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jpk\" (UniqueName: \"kubernetes.io/projected/3a8d425d-cf87-4065-aaf4-8633e9387048-kube-api-access-d6jpk\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovs-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686492 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3a8d425d-cf87-4065-aaf4-8633e9387048-config\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.686523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.687990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovs-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.688026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3a8d425d-cf87-4065-aaf4-8633e9387048-ovn-rundir\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.688582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a8d425d-cf87-4065-aaf4-8633e9387048-config\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.690421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-combined-ca-bundle\") pod 
\"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.705814 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a8d425d-cf87-4065-aaf4-8633e9387048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.715049 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.715975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jpk\" (UniqueName: \"kubernetes.io/projected/3a8d425d-cf87-4065-aaf4-8633e9387048-kube-api-access-d6jpk\") pod \"ovn-controller-metrics-nm5b2\" (UID: \"3a8d425d-cf87-4065-aaf4-8633e9387048\") " pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.759668 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nm5b2" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.787722 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.787770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnxn\" (UniqueName: \"kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.787962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.788131 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.788194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: 
\"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.844288 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.903236 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.904387 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnxn\" (UniqueName: \"kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.904455 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.904607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.904840 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.904962 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.905789 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.906007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.910618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb\") pod \"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.930483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnxn\" (UniqueName: \"kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn\") pod 
\"dnsmasq-dns-d8546fc6f-f8jpp\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.981392 4687 generic.go:334] "Generic (PLEG): container finished" podID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerID="a41ac4a49134afb05d24a5959e88f1f9bb6af34aa475806619d48ff8cee2b4da" exitCode=0 Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.981443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" event={"ID":"d8ccc418-2d3f-4d06-a65d-b3eb590967c4","Type":"ContainerDied","Data":"a41ac4a49134afb05d24a5959e88f1f9bb6af34aa475806619d48ff8cee2b4da"} Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.982856 4687 generic.go:334] "Generic (PLEG): container finished" podID="d7227f96-5855-4699-bc6b-b36309541fb9" containerID="01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a" exitCode=0 Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.983642 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.984002 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" event={"ID":"d7227f96-5855-4699-bc6b-b36309541fb9","Type":"ContainerDied","Data":"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a"} Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.984030 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64944fc74f-h72lx" event={"ID":"d7227f96-5855-4699-bc6b-b36309541fb9","Type":"ContainerDied","Data":"9f781556e966fc3228f3381a20a376e8535b0ddd99c1062444a093d5e023aa73"} Mar 14 09:17:15 crc kubenswrapper[4687]: I0314 09:17:15.984049 4687 scope.go:117] "RemoveContainer" containerID="01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.005056 4687 scope.go:117] "RemoveContainer" containerID="2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.005816 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqk9r\" (UniqueName: \"kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r\") pod \"d7227f96-5855-4699-bc6b-b36309541fb9\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.005930 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config\") pod \"d7227f96-5855-4699-bc6b-b36309541fb9\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.006028 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc\") pod \"d7227f96-5855-4699-bc6b-b36309541fb9\" (UID: \"d7227f96-5855-4699-bc6b-b36309541fb9\") " Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.009578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r" (OuterVolumeSpecName: "kube-api-access-rqk9r") pod "d7227f96-5855-4699-bc6b-b36309541fb9" (UID: "d7227f96-5855-4699-bc6b-b36309541fb9"). InnerVolumeSpecName "kube-api-access-rqk9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.027977 4687 scope.go:117] "RemoveContainer" containerID="01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.044265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.048728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config" (OuterVolumeSpecName: "config") pod "d7227f96-5855-4699-bc6b-b36309541fb9" (UID: "d7227f96-5855-4699-bc6b-b36309541fb9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: E0314 09:17:16.054908 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a\": container with ID starting with 01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a not found: ID does not exist" containerID="01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.055076 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a"} err="failed to get container status \"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a\": rpc error: code = NotFound desc = could not find container \"01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a\": container with ID starting with 01056dad81e970bd43fa94763381700b72c788e771528f97aac663e03e8ade2a not found: ID does not exist" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.055178 4687 scope.go:117] "RemoveContainer" containerID="2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d" Mar 14 09:17:16 crc kubenswrapper[4687]: E0314 09:17:16.055789 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d\": container with ID starting with 2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d not found: ID does not exist" containerID="2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.055838 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d"} 
err="failed to get container status \"2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d\": rpc error: code = NotFound desc = could not find container \"2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d\": container with ID starting with 2a4f165218213f0e2f3160a86cbd4967b32e490f99a7c6f97cc2992d09575b2d not found: ID does not exist" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.112318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7227f96-5855-4699-bc6b-b36309541fb9" (UID: "d7227f96-5855-4699-bc6b-b36309541fb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.123021 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.123051 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7227f96-5855-4699-bc6b-b36309541fb9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.123061 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqk9r\" (UniqueName: \"kubernetes.io/projected/d7227f96-5855-4699-bc6b-b36309541fb9-kube-api-access-rqk9r\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.142704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.206006 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.244764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:16 crc kubenswrapper[4687]: W0314 09:17:16.258132 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18696995_d9d1_4f24_8e5c_3fda320cdc6e.slice/crio-9a77fa795b92714c068ddfdce6f89e5567b9e76e61b8270be2071b4e4b3dd107 WatchSource:0}: Error finding container 9a77fa795b92714c068ddfdce6f89e5567b9e76e61b8270be2071b4e4b3dd107: Status 404 returned error can't find the container with id 9a77fa795b92714c068ddfdce6f89e5567b9e76e61b8270be2071b4e4b3dd107 Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.261517 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nm5b2"] Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.322496 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.329197 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64944fc74f-h72lx"] Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.339702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config\") pod \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.339790 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mx74\" (UniqueName: \"kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74\") pod \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " Mar 14 09:17:16 crc 
kubenswrapper[4687]: I0314 09:17:16.339821 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc\") pod \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\" (UID: \"d8ccc418-2d3f-4d06-a65d-b3eb590967c4\") " Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.352535 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74" (OuterVolumeSpecName: "kube-api-access-4mx74") pod "d8ccc418-2d3f-4d06-a65d-b3eb590967c4" (UID: "d8ccc418-2d3f-4d06-a65d-b3eb590967c4"). InnerVolumeSpecName "kube-api-access-4mx74". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.382846 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config" (OuterVolumeSpecName: "config") pod "d8ccc418-2d3f-4d06-a65d-b3eb590967c4" (UID: "d8ccc418-2d3f-4d06-a65d-b3eb590967c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.405133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8ccc418-2d3f-4d06-a65d-b3eb590967c4" (UID: "d8ccc418-2d3f-4d06-a65d-b3eb590967c4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.442475 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.442501 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mx74\" (UniqueName: \"kubernetes.io/projected/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-kube-api-access-4mx74\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.442512 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8ccc418-2d3f-4d06-a65d-b3eb590967c4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.667035 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:16 crc kubenswrapper[4687]: W0314 09:17:16.676234 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c49079_9f40_4689_bba7_e120bc1455d7.slice/crio-9671552df97266e0cebddb41bfe103cb00d181ccdf9627c9842127242f9751f3 WatchSource:0}: Error finding container 9671552df97266e0cebddb41bfe103cb00d181ccdf9627c9842127242f9751f3: Status 404 returned error can't find the container with id 9671552df97266e0cebddb41bfe103cb00d181ccdf9627c9842127242f9751f3 Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.956930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.957283 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.991940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-metrics-nm5b2" event={"ID":"3a8d425d-cf87-4065-aaf4-8633e9387048","Type":"ContainerStarted","Data":"09e749bd9c624d93b1db44c9b40c13b03d570c90efa8bcddf642812c54b3c091"} Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.991981 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nm5b2" event={"ID":"3a8d425d-cf87-4065-aaf4-8633e9387048","Type":"ContainerStarted","Data":"cc433f3de87ad74cb7eb7a8fc5f9b345fc1fae37628732e062507d73fbc38233"} Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.994994 4687 generic.go:334] "Generic (PLEG): container finished" podID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerID="63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557" exitCode=0 Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.995074 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" event={"ID":"18696995-d9d1-4f24-8e5c-3fda320cdc6e","Type":"ContainerDied","Data":"63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557"} Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.995099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" event={"ID":"18696995-d9d1-4f24-8e5c-3fda320cdc6e","Type":"ContainerStarted","Data":"9a77fa795b92714c068ddfdce6f89e5567b9e76e61b8270be2071b4e4b3dd107"} Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.996960 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.998201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" event={"ID":"d8ccc418-2d3f-4d06-a65d-b3eb590967c4","Type":"ContainerDied","Data":"b802051038d606426b1d37414824a509bff55de207394331713a231dafbf1eec"} Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.998253 4687 scope.go:117] "RemoveContainer" 
containerID="a41ac4a49134afb05d24a5959e88f1f9bb6af34aa475806619d48ff8cee2b4da" Mar 14 09:17:16 crc kubenswrapper[4687]: I0314 09:17:16.998544 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c96c45f57-wkhq6" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.010706 4687 generic.go:334] "Generic (PLEG): container finished" podID="82c49079-9f40-4689-bba7-e120bc1455d7" containerID="83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c" exitCode=0 Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.011061 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" event={"ID":"82c49079-9f40-4689-bba7-e120bc1455d7","Type":"ContainerDied","Data":"83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c"} Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.011109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" event={"ID":"82c49079-9f40-4689-bba7-e120bc1455d7","Type":"ContainerStarted","Data":"9671552df97266e0cebddb41bfe103cb00d181ccdf9627c9842127242f9751f3"} Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.030495 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nm5b2" podStartSLOduration=2.030432756 podStartE2EDuration="2.030432756s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:17.011272185 +0000 UTC m=+1221.999512590" watchObservedRunningTime="2026-03-14 09:17:17.030432756 +0000 UTC m=+1222.018673131" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.072380 4687 scope.go:117] "RemoveContainer" containerID="59c5d2fe642ca902c606410161652522b0c336d94e73eae35484213377ac9fba" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.079509 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.135235 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.144352 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c96c45f57-wkhq6"] Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.486618 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:17 crc kubenswrapper[4687]: E0314 09:17:17.487207 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487225 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: E0314 09:17:17.487247 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="init" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487254 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="init" Mar 14 09:17:17 crc kubenswrapper[4687]: E0314 09:17:17.487275 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487282 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: E0314 09:17:17.487294 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="init" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487300 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="init" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487465 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.487480 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" containerName="dnsmasq-dns" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.488252 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.494567 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.494729 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.494806 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pj4cj" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.494894 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.503552 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669524 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669601 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-config\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-scripts\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669676 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9dt\" (UniqueName: \"kubernetes.io/projected/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-kube-api-access-kc9dt\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669754 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.669781 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.748992 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7227f96-5855-4699-bc6b-b36309541fb9" path="/var/lib/kubelet/pods/d7227f96-5855-4699-bc6b-b36309541fb9/volumes" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.750662 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ccc418-2d3f-4d06-a65d-b3eb590967c4" path="/var/lib/kubelet/pods/d8ccc418-2d3f-4d06-a65d-b3eb590967c4/volumes" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771019 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771079 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771274 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-config\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-scripts\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.771323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9dt\" (UniqueName: \"kubernetes.io/projected/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-kube-api-access-kc9dt\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.772109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.772361 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-scripts\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.772483 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-config\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.777018 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.777550 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.790288 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9dt\" (UniqueName: \"kubernetes.io/projected/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-kube-api-access-kc9dt\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.790931 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab26cdb3-69d2-4c25-8b3f-e46d5c866df9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9\") " pod="openstack/ovn-northd-0" Mar 14 09:17:17 crc kubenswrapper[4687]: I0314 09:17:17.810445 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.021121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" event={"ID":"82c49079-9f40-4689-bba7-e120bc1455d7","Type":"ContainerStarted","Data":"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092"} Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.021683 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.029502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" event={"ID":"18696995-d9d1-4f24-8e5c-3fda320cdc6e","Type":"ContainerStarted","Data":"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608"} Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.039089 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" podStartSLOduration=3.039073721 podStartE2EDuration="3.039073721s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:18.037211685 +0000 UTC m=+1223.025452070" watchObservedRunningTime="2026-03-14 09:17:18.039073721 +0000 UTC m=+1223.027314096" Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.059876 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" podStartSLOduration=3.059859282 podStartE2EDuration="3.059859282s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:18.056205542 +0000 UTC m=+1223.044445917" watchObservedRunningTime="2026-03-14 09:17:18.059859282 +0000 UTC m=+1223.048099657" 
Mar 14 09:17:18 crc kubenswrapper[4687]: I0314 09:17:18.236387 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:18 crc kubenswrapper[4687]: W0314 09:17:18.248975 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab26cdb3_69d2_4c25_8b3f_e46d5c866df9.slice/crio-5392bef7ddfea10224e71495651efddb9a2feea982a344237fda90a91727411f WatchSource:0}: Error finding container 5392bef7ddfea10224e71495651efddb9a2feea982a344237fda90a91727411f: Status 404 returned error can't find the container with id 5392bef7ddfea10224e71495651efddb9a2feea982a344237fda90a91727411f Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.038078 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9","Type":"ContainerStarted","Data":"dea9d7aa9144d0315066d4f719d78d36cf50883b6783974e073d4f6fbb9fdfd0"} Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.038402 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9","Type":"ContainerStarted","Data":"5392bef7ddfea10224e71495651efddb9a2feea982a344237fda90a91727411f"} Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.039708 4687 generic.go:334] "Generic (PLEG): container finished" podID="92acaca1-342f-4033-9247-07768c100649" containerID="a381ed9c3389a03b3f706c329af917bcaf237ea80de7fb593ca12a0ee32a3963" exitCode=0 Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.039752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerDied","Data":"a381ed9c3389a03b3f706c329af917bcaf237ea80de7fb593ca12a0ee32a3963"} Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.040392 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:19 crc kubenswrapper[4687]: E0314 09:17:19.053758 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.219:40644->38.102.83.219:34301: write tcp 38.102.83.219:40644->38.102.83.219:34301: write: broken pipe Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.517400 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 14 09:17:19 crc kubenswrapper[4687]: I0314 09:17:19.517486 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 14 09:17:20 crc kubenswrapper[4687]: I0314 09:17:20.050357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab26cdb3-69d2-4c25-8b3f-e46d5c866df9","Type":"ContainerStarted","Data":"4a5c68cc1bace9b0ef7c49a5f698a3d55e82cbe83cc4c3e7bf3ad30dae6f1a67"} Mar 14 09:17:20 crc kubenswrapper[4687]: I0314 09:17:20.050801 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 14 09:17:20 crc kubenswrapper[4687]: I0314 09:17:20.093437 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.577333416 podStartE2EDuration="3.093416182s" podCreationTimestamp="2026-03-14 09:17:17 +0000 UTC" firstStartedPulling="2026-03-14 09:17:18.251169395 +0000 UTC m=+1223.239409770" lastFinishedPulling="2026-03-14 09:17:18.767252151 +0000 UTC m=+1223.755492536" observedRunningTime="2026-03-14 09:17:20.084272047 +0000 UTC m=+1225.072512432" watchObservedRunningTime="2026-03-14 09:17:20.093416182 +0000 UTC m=+1225.081656567" Mar 14 09:17:20 crc kubenswrapper[4687]: I0314 09:17:20.797087 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:20 crc kubenswrapper[4687]: I0314 09:17:20.797143 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:21 crc kubenswrapper[4687]: I0314 09:17:21.393172 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:21 crc kubenswrapper[4687]: I0314 09:17:21.445072 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 14 09:17:21 crc kubenswrapper[4687]: I0314 09:17:21.521380 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:21 crc kubenswrapper[4687]: I0314 09:17:21.600963 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.018133 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ff42-account-create-update-bhzs2"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.019628 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.021506 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.026560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ff42-account-create-update-bhzs2"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.056176 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mhb7p"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.057412 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.075247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mhb7p"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.155091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.155172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hr2q\" (UniqueName: \"kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.155473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.155613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98h2\" (UniqueName: \"kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.163710 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sbqbl"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.164891 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.172765 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sbqbl"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.234326 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f783-account-create-update-q84bz"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.235382 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.237777 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.256999 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdsb\" (UniqueName: \"kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f98h2\" (UniqueName: 
\"kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257190 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hr2q\" (UniqueName: \"kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.257776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f783-account-create-update-q84bz"] Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.258277 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 
09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.258529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.279892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hr2q\" (UniqueName: \"kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q\") pod \"keystone-ff42-account-create-update-bhzs2\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.279971 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f98h2\" (UniqueName: \"kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2\") pod \"keystone-db-create-mhb7p\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.344934 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.359814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.359946 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4xqx\" (UniqueName: \"kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.359988 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.360026 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdsb\" (UniqueName: \"kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.361448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.380220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdsb\" (UniqueName: \"kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb\") pod \"placement-db-create-sbqbl\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.406698 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.461025 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4xqx\" (UniqueName: \"kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.461086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.461864 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " 
pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.479630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4xqx\" (UniqueName: \"kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx\") pod \"placement-f783-account-create-update-q84bz\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.488660 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.552754 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.819759 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ff42-account-create-update-bhzs2"] Mar 14 09:17:22 crc kubenswrapper[4687]: W0314 09:17:22.828801 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8299fb68_8552_485f_8254_090687b29db6.slice/crio-fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e WatchSource:0}: Error finding container fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e: Status 404 returned error can't find the container with id fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e Mar 14 09:17:22 crc kubenswrapper[4687]: I0314 09:17:22.937583 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mhb7p"] Mar 14 09:17:22 crc kubenswrapper[4687]: W0314 09:17:22.945152 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e14df9_b415_4748_94bf_ad4278477e9d.slice/crio-fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707 WatchSource:0}: Error finding container fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707: Status 404 returned error can't find the container with id fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707 Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.042663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sbqbl"] Mar 14 09:17:23 crc kubenswrapper[4687]: W0314 09:17:23.046262 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0f1160_e2a9_4c87_9a91_24ed3251af19.slice/crio-713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9 WatchSource:0}: Error finding container 713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9: Status 404 returned error can't find the container with id 713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9 Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.055311 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f783-account-create-update-q84bz"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.086688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f783-account-create-update-q84bz" event={"ID":"cf146320-67e6-4f93-9256-84353134846e","Type":"ContainerStarted","Data":"5a7d1e2235ca253014e7641737ccb297819e71b9be062363d30eb80cbcb2b51f"} Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.088647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhb7p" event={"ID":"76e14df9-b415-4748-94bf-ad4278477e9d","Type":"ContainerStarted","Data":"fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707"} Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.090829 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sbqbl" event={"ID":"2b0f1160-e2a9-4c87-9a91-24ed3251af19","Type":"ContainerStarted","Data":"713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9"} Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.092581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff42-account-create-update-bhzs2" event={"ID":"8299fb68-8552-485f-8254-090687b29db6","Type":"ContainerStarted","Data":"af9ec2f08d83ffebd4c23940aaeb9417338a9281d7d5b917e7a6eeabbe7b52ed"} Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.092610 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff42-account-create-update-bhzs2" event={"ID":"8299fb68-8552-485f-8254-090687b29db6","Type":"ContainerStarted","Data":"fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e"} Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.105304 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-mhb7p" podStartSLOduration=1.105285492 podStartE2EDuration="1.105285492s" podCreationTimestamp="2026-03-14 09:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:23.104998365 +0000 UTC m=+1228.093238740" watchObservedRunningTime="2026-03-14 09:17:23.105285492 +0000 UTC m=+1228.093525867" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.120069 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ff42-account-create-update-bhzs2" podStartSLOduration=2.120047985 podStartE2EDuration="2.120047985s" podCreationTimestamp="2026-03-14 09:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:23.117721657 +0000 UTC m=+1228.105962032" 
watchObservedRunningTime="2026-03-14 09:17:23.120047985 +0000 UTC m=+1228.108288360" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.275440 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-nhs66"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.276636 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.354928 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-f1ca-account-create-update-rl7f8"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.356237 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.362971 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.368210 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-f1ca-account-create-update-rl7f8"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.381961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.382083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc59d\" (UniqueName: \"kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 
09:17:23.398283 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nhs66"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.418968 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.468114 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.468911 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="dnsmasq-dns" containerID="cri-o://9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608" gracePeriod=10 Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.472209 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.484154 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.484204 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc59d\" (UniqueName: \"kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.484250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.484326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhpb\" (UniqueName: \"kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.485223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.501259 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.502684 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.521880 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.543303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc59d\" (UniqueName: \"kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d\") pod \"watcher-db-create-nhs66\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsxk\" (UniqueName: \"kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhpb\" (UniqueName: \"kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586693 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586727 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.586805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.587541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.587641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.607555 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4qhpb\" (UniqueName: \"kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb\") pod \"watcher-f1ca-account-create-update-rl7f8\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.680894 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.689803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.689886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.689945 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.690217 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsxk\" (UniqueName: \"kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: 
\"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.691007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.691084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.692158 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.692233 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.692893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: 
I0314 09:17:23.716931 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.723221 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsxk\" (UniqueName: \"kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk\") pod \"dnsmasq-dns-6cb4f544d5-hcqft\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:23 crc kubenswrapper[4687]: I0314 09:17:23.835297 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.043038 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.099932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config\") pod \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.100098 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc\") pod \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.100139 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb\") pod \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " Mar 14 09:17:24 crc kubenswrapper[4687]: 
I0314 09:17:24.100219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plrz6\" (UniqueName: \"kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6\") pod \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\" (UID: \"18696995-d9d1-4f24-8e5c-3fda320cdc6e\") " Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.106350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6" (OuterVolumeSpecName: "kube-api-access-plrz6") pod "18696995-d9d1-4f24-8e5c-3fda320cdc6e" (UID: "18696995-d9d1-4f24-8e5c-3fda320cdc6e"). InnerVolumeSpecName "kube-api-access-plrz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.111161 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.111216 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.111258 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.112428 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.112482 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7" gracePeriod=600 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.123747 4687 generic.go:334] "Generic (PLEG): container finished" podID="8299fb68-8552-485f-8254-090687b29db6" containerID="af9ec2f08d83ffebd4c23940aaeb9417338a9281d7d5b917e7a6eeabbe7b52ed" exitCode=0 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.123817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff42-account-create-update-bhzs2" event={"ID":"8299fb68-8552-485f-8254-090687b29db6","Type":"ContainerDied","Data":"af9ec2f08d83ffebd4c23940aaeb9417338a9281d7d5b917e7a6eeabbe7b52ed"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.128086 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf146320-67e6-4f93-9256-84353134846e" containerID="64b7d6d268ab745f4dca34ee046de5affcae1e7c34cf4bffb50bba13a5b2927e" exitCode=0 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.128231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f783-account-create-update-q84bz" event={"ID":"cf146320-67e6-4f93-9256-84353134846e","Type":"ContainerDied","Data":"64b7d6d268ab745f4dca34ee046de5affcae1e7c34cf4bffb50bba13a5b2927e"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.141313 4687 generic.go:334] "Generic (PLEG): container finished" podID="76e14df9-b415-4748-94bf-ad4278477e9d" 
containerID="bea946123e248cbd13aa174aee3307580b559be2796f899433851cd2698d8f06" exitCode=0 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.141460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhb7p" event={"ID":"76e14df9-b415-4748-94bf-ad4278477e9d","Type":"ContainerDied","Data":"bea946123e248cbd13aa174aee3307580b559be2796f899433851cd2698d8f06"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.144791 4687 generic.go:334] "Generic (PLEG): container finished" podID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerID="9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608" exitCode=0 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.144862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" event={"ID":"18696995-d9d1-4f24-8e5c-3fda320cdc6e","Type":"ContainerDied","Data":"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.144870 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.144898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc89f4b9-n6wwq" event={"ID":"18696995-d9d1-4f24-8e5c-3fda320cdc6e","Type":"ContainerDied","Data":"9a77fa795b92714c068ddfdce6f89e5567b9e76e61b8270be2071b4e4b3dd107"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.144924 4687 scope.go:117] "RemoveContainer" containerID="9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.147228 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b0f1160-e2a9-4c87-9a91-24ed3251af19" containerID="428583a947280295edd8d6c2e684589e749e0b1e4d9fe8e26d0616b323a5b673" exitCode=0 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.147263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sbqbl" event={"ID":"2b0f1160-e2a9-4c87-9a91-24ed3251af19","Type":"ContainerDied","Data":"428583a947280295edd8d6c2e684589e749e0b1e4d9fe8e26d0616b323a5b673"} Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.179510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config" (OuterVolumeSpecName: "config") pod "18696995-d9d1-4f24-8e5c-3fda320cdc6e" (UID: "18696995-d9d1-4f24-8e5c-3fda320cdc6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.182928 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18696995-d9d1-4f24-8e5c-3fda320cdc6e" (UID: "18696995-d9d1-4f24-8e5c-3fda320cdc6e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.185942 4687 scope.go:117] "RemoveContainer" containerID="63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.202064 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.202088 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plrz6\" (UniqueName: \"kubernetes.io/projected/18696995-d9d1-4f24-8e5c-3fda320cdc6e-kube-api-access-plrz6\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.202097 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.208803 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nhs66"] Mar 14 09:17:24 crc kubenswrapper[4687]: W0314 09:17:24.231922 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bbc215f_2d93_4a3f_9a7f_bb7eca345909.slice/crio-4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88 WatchSource:0}: Error finding container 4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88: Status 404 returned error can't find the container with id 4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.232403 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod 
"18696995-d9d1-4f24-8e5c-3fda320cdc6e" (UID: "18696995-d9d1-4f24-8e5c-3fda320cdc6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.252211 4687 scope.go:117] "RemoveContainer" containerID="9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608" Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.252643 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608\": container with ID starting with 9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608 not found: ID does not exist" containerID="9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.252673 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608"} err="failed to get container status \"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608\": rpc error: code = NotFound desc = could not find container \"9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608\": container with ID starting with 9474ec3d897cf11dc5a176dc97b77ecde96f1e99872f6880a7c16f6fbc7c6608 not found: ID does not exist" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.252692 4687 scope.go:117] "RemoveContainer" containerID="63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557" Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.255433 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557\": container with ID starting with 63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557 not found: ID does not exist" 
containerID="63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.255465 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557"} err="failed to get container status \"63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557\": rpc error: code = NotFound desc = could not find container \"63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557\": container with ID starting with 63614b2e6992f655716240db8b58eb6b03e8096f3c3434366c88c56d52330557 not found: ID does not exist" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.292130 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-f1ca-account-create-update-rl7f8"] Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.304222 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18696995-d9d1-4f24-8e5c-3fda320cdc6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:24 crc kubenswrapper[4687]: W0314 09:17:24.306631 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf98286_ad74_40b1_87c0_20bcb0881806.slice/crio-c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b WatchSource:0}: Error finding container c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b: Status 404 returned error can't find the container with id c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.417321 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:17:24 crc kubenswrapper[4687]: W0314 09:17:24.454256 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85071f99_e190_449d_887e_1a0ac20ca074.slice/crio-e47cc2859f8b51eb5e73b9a43330415d035c318642a93cae2d403fa134bc65a1 WatchSource:0}: Error finding container e47cc2859f8b51eb5e73b9a43330415d035c318642a93cae2d403fa134bc65a1: Status 404 returned error can't find the container with id e47cc2859f8b51eb5e73b9a43330415d035c318642a93cae2d403fa134bc65a1 Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.559124 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.559587 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="init" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.559612 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="init" Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.559647 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="dnsmasq-dns" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.559656 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="dnsmasq-dns" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.559872 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" containerName="dnsmasq-dns" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.583080 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.583236 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.598029 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.598353 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.598591 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.598782 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7xtzj" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.678432 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.688878 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65cc89f4b9-n6wwq"] Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711656 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa2b161d-a32b-4bb8-b947-455a1f17aa59-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-cache\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711725 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-lock\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.711745 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4js\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-kube-api-access-jn4js\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-cache\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813793 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-lock\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" 
Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813811 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4js\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-kube-api-access-jn4js\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813893 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.813953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b161d-a32b-4bb8-b947-455a1f17aa59-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.814276 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-lock\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.814382 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 
09:17:24.814396 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.814420 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa2b161d-a32b-4bb8-b947-455a1f17aa59-cache\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: E0314 09:17:24.814440 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:25.314420387 +0000 UTC m=+1230.302660762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.814756 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.819237 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b161d-a32b-4bb8-b947-455a1f17aa59-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.831200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jn4js\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-kube-api-access-jn4js\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:24 crc kubenswrapper[4687]: I0314 09:17:24.842371 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.156785 4687 generic.go:334] "Generic (PLEG): container finished" podID="1bbc215f-2d93-4a3f-9a7f-bb7eca345909" containerID="835eed39430896719b6e962315eabb056eb265478c3ee057960d0a9e60307ac6" exitCode=0 Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.156857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nhs66" event={"ID":"1bbc215f-2d93-4a3f-9a7f-bb7eca345909","Type":"ContainerDied","Data":"835eed39430896719b6e962315eabb056eb265478c3ee057960d0a9e60307ac6"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.157096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nhs66" event={"ID":"1bbc215f-2d93-4a3f-9a7f-bb7eca345909","Type":"ContainerStarted","Data":"4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.163814 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7" exitCode=0 Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.163850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.163887 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.163925 4687 scope.go:117] "RemoveContainer" containerID="3f33304c528fb850897998dea6970fcf4eb449229365646e68712c46edf91d2b" Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.177489 4687 generic.go:334] "Generic (PLEG): container finished" podID="bdf98286-ad74-40b1-87c0-20bcb0881806" containerID="385e1ba082f845a19209eb2104641fab1b2134e01e35b586dd3848fe41063f8b" exitCode=0 Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.177587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-f1ca-account-create-update-rl7f8" event={"ID":"bdf98286-ad74-40b1-87c0-20bcb0881806","Type":"ContainerDied","Data":"385e1ba082f845a19209eb2104641fab1b2134e01e35b586dd3848fe41063f8b"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.177624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-f1ca-account-create-update-rl7f8" event={"ID":"bdf98286-ad74-40b1-87c0-20bcb0881806","Type":"ContainerStarted","Data":"c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.182061 4687 generic.go:334] "Generic (PLEG): container finished" podID="85071f99-e190-449d-887e-1a0ac20ca074" containerID="2fdc091cb12ef27cdae416a588ee3bdd3e9c67889dc8d7d79bbd04c13ff5eb72" exitCode=0 Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.183038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" 
event={"ID":"85071f99-e190-449d-887e-1a0ac20ca074","Type":"ContainerDied","Data":"2fdc091cb12ef27cdae416a588ee3bdd3e9c67889dc8d7d79bbd04c13ff5eb72"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.183065 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" event={"ID":"85071f99-e190-449d-887e-1a0ac20ca074","Type":"ContainerStarted","Data":"e47cc2859f8b51eb5e73b9a43330415d035c318642a93cae2d403fa134bc65a1"} Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.331758 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:25 crc kubenswrapper[4687]: E0314 09:17:25.332121 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:25 crc kubenswrapper[4687]: E0314 09:17:25.332159 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:25 crc kubenswrapper[4687]: E0314 09:17:25.332218 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:26.332195965 +0000 UTC m=+1231.320436350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:25 crc kubenswrapper[4687]: I0314 09:17:25.748134 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18696995-d9d1-4f24-8e5c-3fda320cdc6e" path="/var/lib/kubelet/pods/18696995-d9d1-4f24-8e5c-3fda320cdc6e/volumes" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.144503 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.213685 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-84mgc"] Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.226091 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.250064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nf52\" (UniqueName: \"kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.250200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.252248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-84mgc"] Mar 14 09:17:26 
crc kubenswrapper[4687]: I0314 09:17:26.351640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.351697 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.351774 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nf52\" (UniqueName: \"kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: E0314 09:17:26.352149 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:26 crc kubenswrapper[4687]: E0314 09:17:26.352189 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:26 crc kubenswrapper[4687]: E0314 09:17:26.352235 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:28.352219591 +0000 UTC m=+1233.340459966 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.352745 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.379034 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nf52\" (UniqueName: \"kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52\") pod \"glance-db-create-84mgc\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.468070 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5c0d-account-create-update-flcfj"] Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.469139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.480524 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.513391 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5c0d-account-create-update-flcfj"] Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.548942 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-84mgc" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.665050 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mch68\" (UniqueName: \"kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.665104 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.766340 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mch68\" (UniqueName: \"kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.766400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.773554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.782462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mch68\" (UniqueName: \"kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68\") pod \"glance-5c0d-account-create-update-flcfj\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:26 crc kubenswrapper[4687]: I0314 09:17:26.794433 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.073225 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lljh7"] Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.075183 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.077451 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.085465 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lljh7"] Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.191741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zng9z\" (UniqueName: \"kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z\") pod \"root-account-create-update-lljh7\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.191874 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts\") pod \"root-account-create-update-lljh7\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.292833 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zng9z\" (UniqueName: \"kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z\") pod \"root-account-create-update-lljh7\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.292882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts\") pod \"root-account-create-update-lljh7\" (UID: 
\"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.293818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts\") pod \"root-account-create-update-lljh7\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.309831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zng9z\" (UniqueName: \"kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z\") pod \"root-account-create-update-lljh7\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.394396 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:28 crc kubenswrapper[4687]: E0314 09:17:28.394635 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:28 crc kubenswrapper[4687]: E0314 09:17:28.394679 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:28 crc kubenswrapper[4687]: E0314 09:17:28.394767 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. 
No retries permitted until 2026-03-14 09:17:32.394741061 +0000 UTC m=+1237.382981466 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.423042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.484627 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jnnjj"] Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.485687 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.487616 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.487886 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.496631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jnnjj"] Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.502596 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.597619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 
09:17:28.597663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.597718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.597742 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p25q\" (UniqueName: \"kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.597768 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.597823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 
crc kubenswrapper[4687]: I0314 09:17:28.597839 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699691 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699855 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p25q\" (UniqueName: \"kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.699913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.700527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.700656 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.701031 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.703629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.704262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.706655 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.720568 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p25q\" (UniqueName: \"kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q\") pod \"swift-ring-rebalance-jnnjj\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:28 crc kubenswrapper[4687]: I0314 09:17:28.813167 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.814020 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.824429 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.923083 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts\") pod \"bdf98286-ad74-40b1-87c0-20bcb0881806\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.924170 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdf98286-ad74-40b1-87c0-20bcb0881806" (UID: "bdf98286-ad74-40b1-87c0-20bcb0881806"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.925578 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts\") pod \"cf146320-67e6-4f93-9256-84353134846e\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.925637 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qhpb\" (UniqueName: \"kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb\") pod \"bdf98286-ad74-40b1-87c0-20bcb0881806\" (UID: \"bdf98286-ad74-40b1-87c0-20bcb0881806\") " Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.925662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4xqx\" (UniqueName: \"kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx\") pod \"cf146320-67e6-4f93-9256-84353134846e\" (UID: \"cf146320-67e6-4f93-9256-84353134846e\") " Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.926314 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf98286-ad74-40b1-87c0-20bcb0881806-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.926623 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf146320-67e6-4f93-9256-84353134846e" (UID: "cf146320-67e6-4f93-9256-84353134846e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.929457 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb" (OuterVolumeSpecName: "kube-api-access-4qhpb") pod "bdf98286-ad74-40b1-87c0-20bcb0881806" (UID: "bdf98286-ad74-40b1-87c0-20bcb0881806"). InnerVolumeSpecName "kube-api-access-4qhpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:29 crc kubenswrapper[4687]: I0314 09:17:29.939630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx" (OuterVolumeSpecName: "kube-api-access-s4xqx") pod "cf146320-67e6-4f93-9256-84353134846e" (UID: "cf146320-67e6-4f93-9256-84353134846e"). InnerVolumeSpecName "kube-api-access-s4xqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.028279 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qhpb\" (UniqueName: \"kubernetes.io/projected/bdf98286-ad74-40b1-87c0-20bcb0881806-kube-api-access-4qhpb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.028604 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4xqx\" (UniqueName: \"kubernetes.io/projected/cf146320-67e6-4f93-9256-84353134846e-kube-api-access-s4xqx\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.028615 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf146320-67e6-4f93-9256-84353134846e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.051968 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.068643 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.089594 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.101649 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129130 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hr2q\" (UniqueName: \"kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q\") pod \"8299fb68-8552-485f-8254-090687b29db6\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129194 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f98h2\" (UniqueName: \"kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2\") pod \"76e14df9-b415-4748-94bf-ad4278477e9d\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc59d\" (UniqueName: \"kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d\") pod \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts\") pod \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdsb\" (UniqueName: \"kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb\") pod \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\" (UID: \"2b0f1160-e2a9-4c87-9a91-24ed3251af19\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts\") pod \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\" (UID: \"1bbc215f-2d93-4a3f-9a7f-bb7eca345909\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129457 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts\") pod \"76e14df9-b415-4748-94bf-ad4278477e9d\" (UID: \"76e14df9-b415-4748-94bf-ad4278477e9d\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.129557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts\") pod \"8299fb68-8552-485f-8254-090687b29db6\" (UID: \"8299fb68-8552-485f-8254-090687b29db6\") " Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.130645 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8299fb68-8552-485f-8254-090687b29db6" (UID: "8299fb68-8552-485f-8254-090687b29db6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.130730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b0f1160-e2a9-4c87-9a91-24ed3251af19" (UID: "2b0f1160-e2a9-4c87-9a91-24ed3251af19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.136343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q" (OuterVolumeSpecName: "kube-api-access-5hr2q") pod "8299fb68-8552-485f-8254-090687b29db6" (UID: "8299fb68-8552-485f-8254-090687b29db6"). InnerVolumeSpecName "kube-api-access-5hr2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.136703 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bbc215f-2d93-4a3f-9a7f-bb7eca345909" (UID: "1bbc215f-2d93-4a3f-9a7f-bb7eca345909"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.138512 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76e14df9-b415-4748-94bf-ad4278477e9d" (UID: "76e14df9-b415-4748-94bf-ad4278477e9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.142494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2" (OuterVolumeSpecName: "kube-api-access-f98h2") pod "76e14df9-b415-4748-94bf-ad4278477e9d" (UID: "76e14df9-b415-4748-94bf-ad4278477e9d"). InnerVolumeSpecName "kube-api-access-f98h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.142571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d" (OuterVolumeSpecName: "kube-api-access-zc59d") pod "1bbc215f-2d93-4a3f-9a7f-bb7eca345909" (UID: "1bbc215f-2d93-4a3f-9a7f-bb7eca345909"). InnerVolumeSpecName "kube-api-access-zc59d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.144953 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb" (OuterVolumeSpecName: "kube-api-access-fkdsb") pod "2b0f1160-e2a9-4c87-9a91-24ed3251af19" (UID: "2b0f1160-e2a9-4c87-9a91-24ed3251af19"). InnerVolumeSpecName "kube-api-access-fkdsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.225808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-f1ca-account-create-update-rl7f8" event={"ID":"bdf98286-ad74-40b1-87c0-20bcb0881806","Type":"ContainerDied","Data":"c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.225840 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0810b64993c02be69199e7a362fdc810a5fa0ab36451a1e9860be1d1f97c83b" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.225890 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-f1ca-account-create-update-rl7f8" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.228392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhb7p" event={"ID":"76e14df9-b415-4748-94bf-ad4278477e9d","Type":"ContainerDied","Data":"fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.228435 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea8afd34b966e63a706743bfe55e5fff60d6a72f4dc5f29d426247e366e5707" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.228492 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mhb7p" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231343 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f1160-e2a9-4c87-9a91-24ed3251af19-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231371 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdsb\" (UniqueName: \"kubernetes.io/projected/2b0f1160-e2a9-4c87-9a91-24ed3251af19-kube-api-access-fkdsb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231381 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231389 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e14df9-b415-4748-94bf-ad4278477e9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231397 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8299fb68-8552-485f-8254-090687b29db6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231407 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hr2q\" (UniqueName: \"kubernetes.io/projected/8299fb68-8552-485f-8254-090687b29db6-kube-api-access-5hr2q\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231417 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f98h2\" (UniqueName: \"kubernetes.io/projected/76e14df9-b415-4748-94bf-ad4278477e9d-kube-api-access-f98h2\") on node \"crc\" DevicePath \"\"" 
Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.231426 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc59d\" (UniqueName: \"kubernetes.io/projected/1bbc215f-2d93-4a3f-9a7f-bb7eca345909-kube-api-access-zc59d\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.238132 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nhs66" event={"ID":"1bbc215f-2d93-4a3f-9a7f-bb7eca345909","Type":"ContainerDied","Data":"4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.238173 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1c0c848a87df946c8edf5dce36c4cce909b135cd624f8a0cdc20e892bd5c88" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.238236 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nhs66" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.245346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sbqbl" event={"ID":"2b0f1160-e2a9-4c87-9a91-24ed3251af19","Type":"ContainerDied","Data":"713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.245377 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713cb6f4b71d5077f0b5711fe6f6160223de88af2f3739e42dcc11b6d27645f9" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.245419 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sbqbl" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.247451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" event={"ID":"85071f99-e190-449d-887e-1a0ac20ca074","Type":"ContainerStarted","Data":"4f561719f09a1bed1bc832f22b5acc97b34eabe4761685555c11ff3e8fe23a08"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.247658 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.257069 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff42-account-create-update-bhzs2" event={"ID":"8299fb68-8552-485f-8254-090687b29db6","Type":"ContainerDied","Data":"fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.257107 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe570777ddb35f4956753dff669c662652a71b2908a4d8ad7e4634b55d67c35e" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.257150 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ff42-account-create-update-bhzs2" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.264985 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" podStartSLOduration=7.264969623 podStartE2EDuration="7.264969623s" podCreationTimestamp="2026-03-14 09:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:30.26038186 +0000 UTC m=+1235.248622255" watchObservedRunningTime="2026-03-14 09:17:30.264969623 +0000 UTC m=+1235.253209988" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.268270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f783-account-create-update-q84bz" event={"ID":"cf146320-67e6-4f93-9256-84353134846e","Type":"ContainerDied","Data":"5a7d1e2235ca253014e7641737ccb297819e71b9be062363d30eb80cbcb2b51f"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.268311 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a7d1e2235ca253014e7641737ccb297819e71b9be062363d30eb80cbcb2b51f" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.268383 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f783-account-create-update-q84bz" Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.273412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerStarted","Data":"8eba5653d4d92a4267285543df05027aab8d61ad76f28650755b8d81d34419d6"} Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.341048 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lljh7"] Mar 14 09:17:30 crc kubenswrapper[4687]: W0314 09:17:30.345243 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39932ab2_cef0_43e6_8569_6d94eb96773e.slice/crio-e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746 WatchSource:0}: Error finding container e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746: Status 404 returned error can't find the container with id e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746 Mar 14 09:17:30 crc kubenswrapper[4687]: W0314 09:17:30.450160 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda2793f1_2651_4b4b_ad8c_d7f99e012e42.slice/crio-4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de WatchSource:0}: Error finding container 4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de: Status 404 returned error can't find the container with id 4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.452470 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jnnjj"] Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.460467 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-84mgc"] Mar 14 09:17:30 crc 
kubenswrapper[4687]: W0314 09:17:30.462917 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2036aaf2_2580_421a_979e_d53fabdfbe83.slice/crio-693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129 WatchSource:0}: Error finding container 693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129: Status 404 returned error can't find the container with id 693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129 Mar 14 09:17:30 crc kubenswrapper[4687]: I0314 09:17:30.470343 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5c0d-account-create-update-flcfj"] Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.284433 4687 generic.go:334] "Generic (PLEG): container finished" podID="39932ab2-cef0-43e6-8569-6d94eb96773e" containerID="3d7fbf752efbfbe638129ff99c6c0aba52b9e58e97675ee51b6d45424a773e69" exitCode=0 Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.284474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lljh7" event={"ID":"39932ab2-cef0-43e6-8569-6d94eb96773e","Type":"ContainerDied","Data":"3d7fbf752efbfbe638129ff99c6c0aba52b9e58e97675ee51b6d45424a773e69"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.284701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lljh7" event={"ID":"39932ab2-cef0-43e6-8569-6d94eb96773e","Type":"ContainerStarted","Data":"e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.287064 4687 generic.go:334] "Generic (PLEG): container finished" podID="2036aaf2-2580-421a-979e-d53fabdfbe83" containerID="02a288cec9d81e4b0ec8aa16be474f15bc7f6ff74d5b9f65fa4e18ca66f82873" exitCode=0 Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.287153 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-84mgc" 
event={"ID":"2036aaf2-2580-421a-979e-d53fabdfbe83","Type":"ContainerDied","Data":"02a288cec9d81e4b0ec8aa16be474f15bc7f6ff74d5b9f65fa4e18ca66f82873"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.287193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-84mgc" event={"ID":"2036aaf2-2580-421a-979e-d53fabdfbe83","Type":"ContainerStarted","Data":"693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.289415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jnnjj" event={"ID":"da2793f1-2651-4b4b-ad8c-d7f99e012e42","Type":"ContainerStarted","Data":"4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.291487 4687 generic.go:334] "Generic (PLEG): container finished" podID="6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" containerID="71ef6807eeeb630705627bd9dde7ff2f8f22d77f27c08a50797bf38c368c2f87" exitCode=0 Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.291552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5c0d-account-create-update-flcfj" event={"ID":"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5","Type":"ContainerDied","Data":"71ef6807eeeb630705627bd9dde7ff2f8f22d77f27c08a50797bf38c368c2f87"} Mar 14 09:17:31 crc kubenswrapper[4687]: I0314 09:17:31.291639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5c0d-account-create-update-flcfj" event={"ID":"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5","Type":"ContainerStarted","Data":"dac751689132d4cc6478296f27ec7de5c70909ef19c713dcebf3992a4bc9172b"} Mar 14 09:17:32 crc kubenswrapper[4687]: I0314 09:17:32.301292 4687 generic.go:334] "Generic (PLEG): container finished" podID="6785aec9-5237-4c55-9ec3-1d8783495b3a" containerID="885ae97c5814a24669102bfdff3107133f16780644f8c61b5722176415ea815a" exitCode=0 Mar 14 09:17:32 crc kubenswrapper[4687]: I0314 09:17:32.301436 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6785aec9-5237-4c55-9ec3-1d8783495b3a","Type":"ContainerDied","Data":"885ae97c5814a24669102bfdff3107133f16780644f8c61b5722176415ea815a"} Mar 14 09:17:32 crc kubenswrapper[4687]: I0314 09:17:32.471245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:32 crc kubenswrapper[4687]: E0314 09:17:32.472160 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:32 crc kubenswrapper[4687]: E0314 09:17:32.472189 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:32 crc kubenswrapper[4687]: E0314 09:17:32.472237 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:40.472216656 +0000 UTC m=+1245.460457091 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.315022 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerStarted","Data":"cc3beb08cddfd05f9d2c853733c851cc60f5b3b88fe997b981af027ad5ba5ce2"} Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.529397 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.557292 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-84mgc" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.592104 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702015 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts\") pod \"2036aaf2-2580-421a-979e-d53fabdfbe83\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702127 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nf52\" (UniqueName: \"kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52\") pod \"2036aaf2-2580-421a-979e-d53fabdfbe83\" (UID: \"2036aaf2-2580-421a-979e-d53fabdfbe83\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zng9z\" (UniqueName: \"kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z\") pod \"39932ab2-cef0-43e6-8569-6d94eb96773e\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts\") pod \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702257 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mch68\" (UniqueName: \"kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68\") pod \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\" (UID: \"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702377 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts\") pod \"39932ab2-cef0-43e6-8569-6d94eb96773e\" (UID: \"39932ab2-cef0-43e6-8569-6d94eb96773e\") " Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.702937 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2036aaf2-2580-421a-979e-d53fabdfbe83" (UID: "2036aaf2-2580-421a-979e-d53fabdfbe83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.703137 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" (UID: "6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.703480 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39932ab2-cef0-43e6-8569-6d94eb96773e" (UID: "39932ab2-cef0-43e6-8569-6d94eb96773e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.703662 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39932ab2-cef0-43e6-8569-6d94eb96773e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.703681 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2036aaf2-2580-421a-979e-d53fabdfbe83-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.703690 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.707265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68" (OuterVolumeSpecName: "kube-api-access-mch68") pod "6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" (UID: "6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5"). InnerVolumeSpecName "kube-api-access-mch68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.708030 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z" (OuterVolumeSpecName: "kube-api-access-zng9z") pod "39932ab2-cef0-43e6-8569-6d94eb96773e" (UID: "39932ab2-cef0-43e6-8569-6d94eb96773e"). InnerVolumeSpecName "kube-api-access-zng9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.708692 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52" (OuterVolumeSpecName: "kube-api-access-4nf52") pod "2036aaf2-2580-421a-979e-d53fabdfbe83" (UID: "2036aaf2-2580-421a-979e-d53fabdfbe83"). InnerVolumeSpecName "kube-api-access-4nf52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.806301 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mch68\" (UniqueName: \"kubernetes.io/projected/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5-kube-api-access-mch68\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.806376 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nf52\" (UniqueName: \"kubernetes.io/projected/2036aaf2-2580-421a-979e-d53fabdfbe83-kube-api-access-4nf52\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:33 crc kubenswrapper[4687]: I0314 09:17:33.806393 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zng9z\" (UniqueName: \"kubernetes.io/projected/39932ab2-cef0-43e6-8569-6d94eb96773e-kube-api-access-zng9z\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.338420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5c0d-account-create-update-flcfj" event={"ID":"6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5","Type":"ContainerDied","Data":"dac751689132d4cc6478296f27ec7de5c70909ef19c713dcebf3992a4bc9172b"} Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.338696 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac751689132d4cc6478296f27ec7de5c70909ef19c713dcebf3992a4bc9172b" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.338620 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5c0d-account-create-update-flcfj" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.340759 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6785aec9-5237-4c55-9ec3-1d8783495b3a","Type":"ContainerStarted","Data":"3bac5eba99adaa876400f5be7c4651ecebeb0c8fdd41276f90460af035a0fc5e"} Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.340924 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.343549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lljh7" event={"ID":"39932ab2-cef0-43e6-8569-6d94eb96773e","Type":"ContainerDied","Data":"e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746"} Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.343618 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2295ec2211805f2d16ae5326707b34f7bc60010cef79ae0a1b3d95f2825d746" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.343704 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lljh7" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.347380 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-84mgc" event={"ID":"2036aaf2-2580-421a-979e-d53fabdfbe83","Type":"ContainerDied","Data":"693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129"} Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.347429 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693d0a20eebf49f045a1d8b3045eb76d8643260fe3d2200bb7c7afe56e153129" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.347490 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-84mgc" Mar 14 09:17:34 crc kubenswrapper[4687]: I0314 09:17:34.384977 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.291103259 podStartE2EDuration="58.38496127s" podCreationTimestamp="2026-03-14 09:16:36 +0000 UTC" firstStartedPulling="2026-03-14 09:16:52.831097359 +0000 UTC m=+1197.819337734" lastFinishedPulling="2026-03-14 09:16:57.92495537 +0000 UTC m=+1202.913195745" observedRunningTime="2026-03-14 09:17:34.37158372 +0000 UTC m=+1239.359824095" watchObservedRunningTime="2026-03-14 09:17:34.38496127 +0000 UTC m=+1239.373201645" Mar 14 09:17:35 crc kubenswrapper[4687]: I0314 09:17:35.355952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jnnjj" event={"ID":"da2793f1-2651-4b4b-ad8c-d7f99e012e42","Type":"ContainerStarted","Data":"885a230ad597bb33fda05531dd6776005d9d14ee5db66849f77abef38cc7a93c"} Mar 14 09:17:35 crc kubenswrapper[4687]: I0314 09:17:35.376751 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jnnjj" podStartSLOduration=3.574341525 podStartE2EDuration="7.376735555s" podCreationTimestamp="2026-03-14 09:17:28 +0000 UTC" firstStartedPulling="2026-03-14 09:17:30.452675529 +0000 UTC m=+1235.440915904" lastFinishedPulling="2026-03-14 09:17:34.255069559 +0000 UTC m=+1239.243309934" observedRunningTime="2026-03-14 09:17:35.372256135 +0000 UTC m=+1240.360496530" watchObservedRunningTime="2026-03-14 09:17:35.376735555 +0000 UTC m=+1240.364975930" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.366676 4687 generic.go:334] "Generic (PLEG): container finished" podID="4bc54e55-6120-453d-8955-b7f478318618" containerID="15d97e47b4e0ab124300f2e7a4177f49aeb6270c7f2b72a1b6c54a1c56017025" exitCode=0 Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.366730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4bc54e55-6120-453d-8955-b7f478318618","Type":"ContainerDied","Data":"15d97e47b4e0ab124300f2e7a4177f49aeb6270c7f2b72a1b6c54a1c56017025"} Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.377823 4687 generic.go:334] "Generic (PLEG): container finished" podID="83405408-9b65-42fd-955c-952cad220093" containerID="a35d7a7367010c6ee5fceb6e6f64753a6d08882f88aa7775cc2a9bfc042f548f" exitCode=0 Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.377876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"83405408-9b65-42fd-955c-952cad220093","Type":"ContainerDied","Data":"a35d7a7367010c6ee5fceb6e6f64753a6d08882f88aa7775cc2a9bfc042f548f"} Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.626257 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wztwf"] Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.627894 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8299fb68-8552-485f-8254-090687b29db6" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.627954 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8299fb68-8552-485f-8254-090687b29db6" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.628012 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e14df9-b415-4748-94bf-ad4278477e9d" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.628032 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e14df9-b415-4748-94bf-ad4278477e9d" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.628050 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbc215f-2d93-4a3f-9a7f-bb7eca345909" containerName="mariadb-database-create" Mar 14 09:17:36 crc 
kubenswrapper[4687]: I0314 09:17:36.628067 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbc215f-2d93-4a3f-9a7f-bb7eca345909" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.628245 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf146320-67e6-4f93-9256-84353134846e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.628268 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf146320-67e6-4f93-9256-84353134846e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.628299 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.628312 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.629102 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39932ab2-cef0-43e6-8569-6d94eb96773e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629118 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="39932ab2-cef0-43e6-8569-6d94eb96773e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.629138 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2036aaf2-2580-421a-979e-d53fabdfbe83" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629157 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2036aaf2-2580-421a-979e-d53fabdfbe83" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.629168 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdf98286-ad74-40b1-87c0-20bcb0881806" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629174 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf98286-ad74-40b1-87c0-20bcb0881806" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: E0314 09:17:36.629181 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0f1160-e2a9-4c87-9a91-24ed3251af19" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629189 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0f1160-e2a9-4c87-9a91-24ed3251af19" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629714 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf146320-67e6-4f93-9256-84353134846e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629739 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="39932ab2-cef0-43e6-8569-6d94eb96773e" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629790 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e14df9-b415-4748-94bf-ad4278477e9d" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629810 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2036aaf2-2580-421a-979e-d53fabdfbe83" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629832 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf98286-ad74-40b1-87c0-20bcb0881806" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629845 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8299fb68-8552-485f-8254-090687b29db6" 
containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629866 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" containerName="mariadb-account-create-update" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629875 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbc215f-2d93-4a3f-9a7f-bb7eca345909" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.629890 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0f1160-e2a9-4c87-9a91-24ed3251af19" containerName="mariadb-database-create" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.630796 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.632740 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.634239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4zr4j" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.658931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wztwf"] Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.760062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.760098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.760127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.760170 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thwxx\" (UniqueName: \"kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.861826 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thwxx\" (UniqueName: \"kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.862285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.862405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.862510 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.868455 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.869002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.869037 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data\") pod \"glance-db-sync-wztwf\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.883982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thwxx\" (UniqueName: \"kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx\") pod \"glance-db-sync-wztwf\" (UID: 
\"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") " pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:36 crc kubenswrapper[4687]: I0314 09:17:36.951786 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wztwf" Mar 14 09:17:37 crc kubenswrapper[4687]: I0314 09:17:37.898514 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.142997 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wztwf"] Mar 14 09:17:38 crc kubenswrapper[4687]: W0314 09:17:38.145539 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38ba5c4b_9a2a_43ce_a6c0_f3488284929c.slice/crio-987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5 WatchSource:0}: Error finding container 987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5: Status 404 returned error can't find the container with id 987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5 Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.395186 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"83405408-9b65-42fd-955c-952cad220093","Type":"ContainerStarted","Data":"36186dd3270ab7545e6295a46883a98dc67ec470743e12a61d22243d4029ed96"} Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.396569 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.398529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4bc54e55-6120-453d-8955-b7f478318618","Type":"ContainerStarted","Data":"7eaf275bac6159963c55ca48f748aecfa6f2281c10205beec12397980524b01d"} Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.399291 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.402307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wztwf" event={"ID":"38ba5c4b-9a2a-43ce-a6c0-f3488284929c","Type":"ContainerStarted","Data":"987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5"} Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.404940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerStarted","Data":"6a3b5a5aae8a33ee44c19204110d3fa7536960e8497a5a4e50347e5c82dac341"} Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.453977 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=59.876046869 podStartE2EDuration="1m2.453959661s" podCreationTimestamp="2026-03-14 09:16:36 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.391525382 +0000 UTC m=+1200.379765757" lastFinishedPulling="2026-03-14 09:16:57.969438174 +0000 UTC m=+1202.957678549" observedRunningTime="2026-03-14 09:17:38.424984616 +0000 UTC m=+1243.413224981" watchObservedRunningTime="2026-03-14 09:17:38.453959661 +0000 UTC m=+1243.442200036" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.454863 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.556453115 podStartE2EDuration="55.454855422s" podCreationTimestamp="2026-03-14 09:16:43 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.373802396 +0000 UTC m=+1200.362042771" lastFinishedPulling="2026-03-14 09:17:37.272204703 +0000 UTC m=+1242.260445078" observedRunningTime="2026-03-14 09:17:38.449029169 +0000 UTC m=+1243.437269544" watchObservedRunningTime="2026-03-14 09:17:38.454855422 +0000 UTC m=+1243.443095807" Mar 14 09:17:38 
crc kubenswrapper[4687]: I0314 09:17:38.484492 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=59.927500347 podStartE2EDuration="1m2.484431302s" podCreationTimestamp="2026-03-14 09:16:36 +0000 UTC" firstStartedPulling="2026-03-14 09:16:55.394531606 +0000 UTC m=+1200.382771971" lastFinishedPulling="2026-03-14 09:16:57.951462551 +0000 UTC m=+1202.939702926" observedRunningTime="2026-03-14 09:17:38.475555323 +0000 UTC m=+1243.463795698" watchObservedRunningTime="2026-03-14 09:17:38.484431302 +0000 UTC m=+1243.472671687" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.837511 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.915577 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:38 crc kubenswrapper[4687]: I0314 09:17:38.925299 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="dnsmasq-dns" containerID="cri-o://976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092" gracePeriod=10 Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.408983 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.414497 4687 generic.go:334] "Generic (PLEG): container finished" podID="82c49079-9f40-4689-bba7-e120bc1455d7" containerID="976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092" exitCode=0 Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.414540 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.414597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" event={"ID":"82c49079-9f40-4689-bba7-e120bc1455d7","Type":"ContainerDied","Data":"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092"} Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.414633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8546fc6f-f8jpp" event={"ID":"82c49079-9f40-4689-bba7-e120bc1455d7","Type":"ContainerDied","Data":"9671552df97266e0cebddb41bfe103cb00d181ccdf9627c9842127242f9751f3"} Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.414674 4687 scope.go:117] "RemoveContainer" containerID="976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.458615 4687 scope.go:117] "RemoveContainer" containerID="83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.480717 4687 scope.go:117] "RemoveContainer" containerID="976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092" Mar 14 09:17:39 crc kubenswrapper[4687]: E0314 09:17:39.481644 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092\": container with ID starting with 976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092 not found: ID does not exist" containerID="976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.481693 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092"} err="failed to get container status 
\"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092\": rpc error: code = NotFound desc = could not find container \"976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092\": container with ID starting with 976c103b9642d412b582bc5d4b5f99943f717335396984d3d222595fa645a092 not found: ID does not exist" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.481726 4687 scope.go:117] "RemoveContainer" containerID="83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c" Mar 14 09:17:39 crc kubenswrapper[4687]: E0314 09:17:39.482189 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c\": container with ID starting with 83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c not found: ID does not exist" containerID="83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.482226 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c"} err="failed to get container status \"83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c\": rpc error: code = NotFound desc = could not find container \"83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c\": container with ID starting with 83618a9ae85141e2e4f552297b4041f01596e550ab15040fccc00543fcab004c not found: ID does not exist" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.507199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.509766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbnxn\" (UniqueName: 
\"kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn\") pod \"82c49079-9f40-4689-bba7-e120bc1455d7\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.509819 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config\") pod \"82c49079-9f40-4689-bba7-e120bc1455d7\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.509883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc\") pod \"82c49079-9f40-4689-bba7-e120bc1455d7\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.509975 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb\") pod \"82c49079-9f40-4689-bba7-e120bc1455d7\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.510110 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb\") pod \"82c49079-9f40-4689-bba7-e120bc1455d7\" (UID: \"82c49079-9f40-4689-bba7-e120bc1455d7\") " Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.520865 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn" (OuterVolumeSpecName: "kube-api-access-gbnxn") pod "82c49079-9f40-4689-bba7-e120bc1455d7" (UID: "82c49079-9f40-4689-bba7-e120bc1455d7"). InnerVolumeSpecName "kube-api-access-gbnxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.590824 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82c49079-9f40-4689-bba7-e120bc1455d7" (UID: "82c49079-9f40-4689-bba7-e120bc1455d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.591547 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82c49079-9f40-4689-bba7-e120bc1455d7" (UID: "82c49079-9f40-4689-bba7-e120bc1455d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.592298 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82c49079-9f40-4689-bba7-e120bc1455d7" (UID: "82c49079-9f40-4689-bba7-e120bc1455d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.594918 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config" (OuterVolumeSpecName: "config") pod "82c49079-9f40-4689-bba7-e120bc1455d7" (UID: "82c49079-9f40-4689-bba7-e120bc1455d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.599483 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lljh7"] Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.607620 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lljh7"] Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.612533 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.612709 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.612798 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbnxn\" (UniqueName: \"kubernetes.io/projected/82c49079-9f40-4689-bba7-e120bc1455d7-kube-api-access-gbnxn\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.612879 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.612942 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c49079-9f40-4689-bba7-e120bc1455d7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.747917 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39932ab2-cef0-43e6-8569-6d94eb96773e" path="/var/lib/kubelet/pods/39932ab2-cef0-43e6-8569-6d94eb96773e/volumes" Mar 14 09:17:39 crc 
kubenswrapper[4687]: I0314 09:17:39.748498 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:39 crc kubenswrapper[4687]: I0314 09:17:39.752832 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d8546fc6f-f8jpp"] Mar 14 09:17:40 crc kubenswrapper[4687]: I0314 09:17:40.526617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0" Mar 14 09:17:40 crc kubenswrapper[4687]: E0314 09:17:40.526804 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:40 crc kubenswrapper[4687]: E0314 09:17:40.526819 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:40 crc kubenswrapper[4687]: E0314 09:17:40.526853 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift podName:fa2b161d-a32b-4bb8-b947-455a1f17aa59 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:56.526840151 +0000 UTC m=+1261.515080526 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift") pod "swift-storage-0" (UID: "fa2b161d-a32b-4bb8-b947-455a1f17aa59") : configmap "swift-ring-files" not found Mar 14 09:17:41 crc kubenswrapper[4687]: I0314 09:17:41.125040 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5czb" podUID="5f410ca3-8151-42b5-9250-837b9444eb7e" containerName="ovn-controller" probeResult="failure" output=< Mar 14 09:17:41 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 14 09:17:41 crc kubenswrapper[4687]: > Mar 14 09:17:41 crc kubenswrapper[4687]: I0314 09:17:41.747309 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" path="/var/lib/kubelet/pods/82c49079-9f40-4689-bba7-e120bc1455d7/volumes" Mar 14 09:17:43 crc kubenswrapper[4687]: I0314 09:17:43.449956 4687 generic.go:334] "Generic (PLEG): container finished" podID="da2793f1-2651-4b4b-ad8c-d7f99e012e42" containerID="885a230ad597bb33fda05531dd6776005d9d14ee5db66849f77abef38cc7a93c" exitCode=0 Mar 14 09:17:43 crc kubenswrapper[4687]: I0314 09:17:43.450058 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jnnjj" event={"ID":"da2793f1-2651-4b4b-ad8c-d7f99e012e42","Type":"ContainerDied","Data":"885a230ad597bb33fda05531dd6776005d9d14ee5db66849f77abef38cc7a93c"} Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.506961 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.510098 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.584243 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-zq8ml"] Mar 14 09:17:44 crc kubenswrapper[4687]: E0314 09:17:44.584589 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="dnsmasq-dns" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.584600 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="dnsmasq-dns" Mar 14 09:17:44 crc kubenswrapper[4687]: E0314 09:17:44.584618 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="init" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.584625 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="init" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.584777 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c49079-9f40-4689-bba7-e120bc1455d7" containerName="dnsmasq-dns" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.585275 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.590833 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.601184 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zq8ml"] Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.695611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb89\" (UniqueName: \"kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89\") pod \"root-account-create-update-zq8ml\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.695698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts\") pod \"root-account-create-update-zq8ml\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.797509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwb89\" (UniqueName: \"kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89\") pod \"root-account-create-update-zq8ml\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.797691 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts\") pod \"root-account-create-update-zq8ml\" (UID: 
\"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.798678 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts\") pod \"root-account-create-update-zq8ml\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.822112 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwb89\" (UniqueName: \"kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89\") pod \"root-account-create-update-zq8ml\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:44 crc kubenswrapper[4687]: I0314 09:17:44.910942 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:45 crc kubenswrapper[4687]: I0314 09:17:45.467265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.128188 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5czb" podUID="5f410ca3-8151-42b5-9250-837b9444eb7e" containerName="ovn-controller" probeResult="failure" output=< Mar 14 09:17:46 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 14 09:17:46 crc kubenswrapper[4687]: > Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.165177 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.176798 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sfclc" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.424786 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s5czb-config-7fpg7"] Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.427665 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.433066 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.447968 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5czb-config-7fpg7"] Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538502 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: 
\"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538518 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.538535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxnk\" (UniqueName: \"kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: 
\"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.640977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.641000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxnk\" (UniqueName: \"kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: 
\"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.641564 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.642077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.645210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.659308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxnk\" (UniqueName: \"kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk\") pod \"ovn-controller-s5czb-config-7fpg7\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") " pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:46 crc kubenswrapper[4687]: I0314 09:17:46.766850 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:47 crc kubenswrapper[4687]: I0314 09:17:47.375171 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6785aec9-5237-4c55-9ec3-1d8783495b3a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Mar 14 09:17:47 crc kubenswrapper[4687]: I0314 09:17:47.670648 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4bc54e55-6120-453d-8955-b7f478318618" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused" Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.008702 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="83405408-9b65-42fd-955c-952cad220093" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.020574 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.020805 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" containerID="cri-o://8eba5653d4d92a4267285543df05027aab8d61ad76f28650755b8d81d34419d6" gracePeriod=600 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.020886 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="thanos-sidecar" containerID="cri-o://6a3b5a5aae8a33ee44c19204110d3fa7536960e8497a5a4e50347e5c82dac341" gracePeriod=600 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.020886 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="config-reloader" containerID="cri-o://cc3beb08cddfd05f9d2c853733c851cc60f5b3b88fe997b981af027ad5ba5ce2" gracePeriod=600 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501404 4687 generic.go:334] "Generic (PLEG): container finished" podID="92acaca1-342f-4033-9247-07768c100649" containerID="6a3b5a5aae8a33ee44c19204110d3fa7536960e8497a5a4e50347e5c82dac341" exitCode=0 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501656 4687 generic.go:334] "Generic (PLEG): container finished" podID="92acaca1-342f-4033-9247-07768c100649" containerID="cc3beb08cddfd05f9d2c853733c851cc60f5b3b88fe997b981af027ad5ba5ce2" exitCode=0 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501664 4687 generic.go:334] "Generic (PLEG): container finished" podID="92acaca1-342f-4033-9247-07768c100649" containerID="8eba5653d4d92a4267285543df05027aab8d61ad76f28650755b8d81d34419d6" exitCode=0 Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerDied","Data":"6a3b5a5aae8a33ee44c19204110d3fa7536960e8497a5a4e50347e5c82dac341"} Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501707 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerDied","Data":"cc3beb08cddfd05f9d2c853733c851cc60f5b3b88fe997b981af027ad5ba5ce2"} Mar 14 09:17:48 crc kubenswrapper[4687]: I0314 09:17:48.501716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerDied","Data":"8eba5653d4d92a4267285543df05027aab8d61ad76f28650755b8d81d34419d6"} Mar 
14 09:17:49 crc kubenswrapper[4687]: I0314 09:17:49.507810 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.116:9090/-/ready\": dial tcp 10.217.0.116:9090: connect: connection refused" Mar 14 09:17:51 crc kubenswrapper[4687]: I0314 09:17:51.126446 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5czb" podUID="5f410ca3-8151-42b5-9250-837b9444eb7e" containerName="ovn-controller" probeResult="failure" output=< Mar 14 09:17:51 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 14 09:17:51 crc kubenswrapper[4687]: > Mar 14 09:17:54 crc kubenswrapper[4687]: I0314 09:17:54.507954 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.116:9090/-/ready\": dial tcp 10.217.0.116:9090: connect: connection refused" Mar 14 09:17:54 crc kubenswrapper[4687]: E0314 09:17:54.883467 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Mar 14 09:17:54 crc kubenswrapper[4687]: E0314 09:17:54.883868 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Mar 14 09:17:54 crc kubenswrapper[4687]: E0314 09:17:54.884043 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.243:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thwxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wztwf_openstack(38ba5c4b-9a2a-43ce-a6c0-f3488284929c): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Mar 14 09:17:54 crc kubenswrapper[4687]: E0314 09:17:54.885374 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wztwf" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" Mar 14 09:17:54 crc kubenswrapper[4687]: I0314 09:17:54.988965 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019518 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: 
\"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p25q\" (UniqueName: \"kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.019873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts\") pod \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\" (UID: \"da2793f1-2651-4b4b-ad8c-d7f99e012e42\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.020437 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.021637 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.027302 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q" (OuterVolumeSpecName: "kube-api-access-4p25q") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "kube-api-access-4p25q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.031396 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.049544 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts" (OuterVolumeSpecName: "scripts") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.060781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.065041 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "da2793f1-2651-4b4b-ad8c-d7f99e012e42" (UID: "da2793f1-2651-4b4b-ad8c-d7f99e012e42"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121396 4687 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121425 4687 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121438 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p25q\" (UniqueName: \"kubernetes.io/projected/da2793f1-2651-4b4b-ad8c-d7f99e012e42-kube-api-access-4p25q\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121448 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da2793f1-2651-4b4b-ad8c-d7f99e012e42-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121456 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da2793f1-2651-4b4b-ad8c-d7f99e012e42-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121465 4687 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.121475 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2793f1-2651-4b4b-ad8c-d7f99e012e42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.189721 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225322 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225380 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225401 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " 
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225477 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgl5n\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225522 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225546 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225561 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.225696 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"92acaca1-342f-4033-9247-07768c100649\" (UID: \"92acaca1-342f-4033-9247-07768c100649\") " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.226372 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.228664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.229499 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.230222 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.235121 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out" (OuterVolumeSpecName: "config-out") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.235188 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.235223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config" (OuterVolumeSpecName: "config") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.235239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n" (OuterVolumeSpecName: "kube-api-access-bgl5n") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "kube-api-access-bgl5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.255230 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config" (OuterVolumeSpecName: "web-config") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.256105 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "92acaca1-342f-4033-9247-07768c100649" (UID: "92acaca1-342f-4033-9247-07768c100649"). InnerVolumeSpecName "pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328077 4687 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92acaca1-342f-4033-9247-07768c100649-config-out\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328111 4687 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-web-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328121 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328131 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328143 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328152 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgl5n\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-kube-api-access-bgl5n\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328161 4687 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92acaca1-342f-4033-9247-07768c100649-thanos-prometheus-http-client-file\") 
on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328170 4687 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92acaca1-342f-4033-9247-07768c100649-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328178 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/92acaca1-342f-4033-9247-07768c100649-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.328216 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") on node \"crc\" " Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.345851 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.346002 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf") on node "crc" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.399279 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zq8ml"] Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.409436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5czb-config-7fpg7"] Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.429871 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.558686 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5czb-config-7fpg7" event={"ID":"5df204be-f201-46a5-a5da-03a21ac1b7b0","Type":"ContainerStarted","Data":"e6ee1193bf7e63b62e538dd80aaf28f0c700720fc7827167f73c0aee928b22f7"} Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.562298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"92acaca1-342f-4033-9247-07768c100649","Type":"ContainerDied","Data":"dccac1b6b8492488a5a46e2e149e31e05ae30a05f82f45d3738c97f64981a73a"} Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.562386 4687 scope.go:117] "RemoveContainer" containerID="6a3b5a5aae8a33ee44c19204110d3fa7536960e8497a5a4e50347e5c82dac341" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.562507 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.569771 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jnnjj" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.569797 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jnnjj" event={"ID":"da2793f1-2651-4b4b-ad8c-d7f99e012e42","Type":"ContainerDied","Data":"4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de"} Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.569833 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8e1b5e738c335bc2df79a3afec06b5cf22ec94ac5a9b6fe7fcd284457577de" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.576944 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq8ml" event={"ID":"9b6cf348-a162-431d-9399-f350f28e5b2d","Type":"ContainerStarted","Data":"b4391fa588f93cd92c1cd6daa837784254f3fc181b3b67ac352a96c8cd9e1af1"} Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.578216 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-wztwf" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.621180 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.636584 4687 scope.go:117] "RemoveContainer" containerID="cc3beb08cddfd05f9d2c853733c851cc60f5b3b88fe997b981af027ad5ba5ce2" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.639443 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.651713 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.652099 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2793f1-2651-4b4b-ad8c-d7f99e012e42" containerName="swift-ring-rebalance" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652115 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2793f1-2651-4b4b-ad8c-d7f99e012e42" containerName="swift-ring-rebalance" Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.652141 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="init-config-reloader" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652147 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="init-config-reloader" Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.652161 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652167 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.652174 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="thanos-sidecar" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652179 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="thanos-sidecar" Mar 14 09:17:55 crc kubenswrapper[4687]: E0314 09:17:55.652193 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92acaca1-342f-4033-9247-07768c100649" 
containerName="config-reloader" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652199 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="config-reloader" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652383 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="config-reloader" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652395 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="thanos-sidecar" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652406 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2793f1-2651-4b4b-ad8c-d7f99e012e42" containerName="swift-ring-rebalance" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.652414 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="92acaca1-342f-4033-9247-07768c100649" containerName="prometheus" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.655697 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.659382 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.659887 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.660063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-s2k8k" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.660142 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.660150 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.660158 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.660259 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.661064 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.665401 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.670285 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.684522 4687 scope.go:117] "RemoveContainer" 
containerID="8eba5653d4d92a4267285543df05027aab8d61ad76f28650755b8d81d34419d6" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.724678 4687 scope.go:117] "RemoveContainer" containerID="a381ed9c3389a03b3f706c329af917bcaf237ea80de7fb593ca12a0ee32a3963" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.753516 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92acaca1-342f-4033-9247-07768c100649" path="/var/lib/kubelet/pods/92acaca1-342f-4033-9247-07768c100649/volumes" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.836825 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40c58540-7bfb-429a-bce0-2231dffb158e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.836874 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837607 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837862 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837971 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " 
pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.837996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.838021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.838075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.838093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.838130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bkn6l\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-kube-api-access-bkn6l\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939614 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939709 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939808 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939892 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkn6l\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-kube-api-access-bkn6l\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40c58540-7bfb-429a-bce0-2231dffb158e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.939983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.943622 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.943644 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.943888 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.944006 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.944067 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.944077 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.944160 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.945890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40c58540-7bfb-429a-bce0-2231dffb158e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.946657 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.946686 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e883ad2e85d4265753417decd9704d55cec1792fde830c419ee7ac911f8f2bd8/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.946696 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.947436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.951634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.952832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.954057 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40c58540-7bfb-429a-bce0-2231dffb158e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.954968 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.954981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.956866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-config\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.959137 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.959634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40c58540-7bfb-429a-bce0-2231dffb158e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.976592 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkn6l\" (UniqueName: \"kubernetes.io/projected/40c58540-7bfb-429a-bce0-2231dffb158e-kube-api-access-bkn6l\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:55 crc kubenswrapper[4687]: I0314 09:17:55.985581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1136d4e-9adb-4d31-951a-ec4ccc3374bf\") pod \"prometheus-metric-storage-0\" (UID: \"40c58540-7bfb-429a-bce0-2231dffb158e\") " pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.124953 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5czb" podUID="5f410ca3-8151-42b5-9250-837b9444eb7e" containerName="ovn-controller" probeResult="failure" output=<
Mar 14 09:17:56 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 14 09:17:56 crc kubenswrapper[4687]: >
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.283771 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-s2k8k"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.293262 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.549902 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.555967 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa2b161d-a32b-4bb8-b947-455a1f17aa59-etc-swift\") pod \"swift-storage-0\" (UID: \"fa2b161d-a32b-4bb8-b947-455a1f17aa59\") " pod="openstack/swift-storage-0"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.590475 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b6cf348-a162-431d-9399-f350f28e5b2d" containerID="7d8fa70dd0f3b77557f22a6ec2a35c17e944e5b5531022d011158beef0729f72" exitCode=0
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.590567 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq8ml" event={"ID":"9b6cf348-a162-431d-9399-f350f28e5b2d","Type":"ContainerDied","Data":"7d8fa70dd0f3b77557f22a6ec2a35c17e944e5b5531022d011158beef0729f72"}
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.592315 4687 generic.go:334] "Generic (PLEG): container finished" podID="5df204be-f201-46a5-a5da-03a21ac1b7b0" containerID="8c0c9f182cb39763cb31ee985e82b752908184f0f6b7bb1f7553b495ec2351dc" exitCode=0
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.592388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5czb-config-7fpg7" event={"ID":"5df204be-f201-46a5-a5da-03a21ac1b7b0","Type":"ContainerDied","Data":"8c0c9f182cb39763cb31ee985e82b752908184f0f6b7bb1f7553b495ec2351dc"}
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.727114 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 14 09:17:56 crc kubenswrapper[4687]: I0314 09:17:56.733631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 14 09:17:56 crc kubenswrapper[4687]: W0314 09:17:56.745611 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40c58540_7bfb_429a_bce0_2231dffb158e.slice/crio-cb2b2d42b47563f67c025a3b6b9092f29ca945d5a5eed39fbf270419feb02232 WatchSource:0}: Error finding container cb2b2d42b47563f67c025a3b6b9092f29ca945d5a5eed39fbf270419feb02232: Status 404 returned error can't find the container with id cb2b2d42b47563f67c025a3b6b9092f29ca945d5a5eed39fbf270419feb02232
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.325511 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.374825 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.604706 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerStarted","Data":"cb2b2d42b47563f67c025a3b6b9092f29ca945d5a5eed39fbf270419feb02232"}
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.606428 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"3b9a6229a45702cba8a4f4e2ae856e28376243065154fcf038de85b4c8fa943b"}
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.670519 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.890581 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k84m2"]
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.892834 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.920732 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k84m2"]
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.975384 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4x7v\" (UniqueName: \"kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.975575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.985849 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b81a-account-create-update-ktznw"]
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.987178 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.995352 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.995576 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sz9dv"]
Mar 14 09:17:57 crc kubenswrapper[4687]: I0314 09:17:57.996779 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.009617 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.014621 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b81a-account-create-update-ktznw"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.033518 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sz9dv"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.079571 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.079679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.079741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr67p\" (UniqueName: \"kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.079821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4x7v\" (UniqueName: \"kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.080905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.110529 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ed35-account-create-update-hlqxb"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.115703 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.118719 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ed35-account-create-update-hlqxb"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.124496 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.150243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4x7v\" (UniqueName: \"kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v\") pod \"cinder-db-create-k84m2\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.182411 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sxhld"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.183752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.184124 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.184251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr67p\" (UniqueName: \"kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.184312 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.184404 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.185921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.190563 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.191427 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.191723 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4vhwn"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.192179 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.194006 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sxhld"]
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.222986 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k84m2"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.230399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr67p\" (UniqueName: \"kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p\") pod \"cinder-b81a-account-create-update-ktznw\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286138 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64x9q\" (UniqueName: \"kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf27\" (UniqueName: \"kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.286400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.287131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.297472 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5czb-config-7fpg7"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.303023 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj\") pod \"barbican-db-create-sz9dv\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.309160 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b81a-account-create-update-ktznw"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.321227 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sz9dv"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.387202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.387267 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.387316 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmxnk\" (UniqueName: \"kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388404 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388422 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts\") pod \"5df204be-f201-46a5-a5da-03a21ac1b7b0\" (UID: \"5df204be-f201-46a5-a5da-03a21ac1b7b0\") "
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run" (OuterVolumeSpecName: "var-run") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64x9q\" (UniqueName: \"kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf27\" (UniqueName: \"kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388939 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.388996 4687 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.389008 4687 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-run\") on node \"crc\" DevicePath \"\""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.389251 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.389762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.390089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts" (OuterVolumeSpecName: "scripts") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.387316 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.393062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.396857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.403651 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk" (OuterVolumeSpecName: "kube-api-access-kmxnk") pod "5df204be-f201-46a5-a5da-03a21ac1b7b0" (UID: "5df204be-f201-46a5-a5da-03a21ac1b7b0"). InnerVolumeSpecName "kube-api-access-kmxnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.406684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64x9q\" (UniqueName: \"kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q\") pod \"barbican-ed35-account-create-update-hlqxb\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.406883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf27\" (UniqueName: \"kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27\") pod \"keystone-db-sync-sxhld\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " pod="openstack/keystone-db-sync-sxhld"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.485644 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed35-account-create-update-hlqxb"
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.490456 4687 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-additional-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.490495 4687 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5df204be-f201-46a5-a5da-03a21ac1b7b0-var-log-ovn\") on node \"crc\" DevicePath \"\""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.490505 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df204be-f201-46a5-a5da-03a21ac1b7b0-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.490515 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmxnk\"
(UniqueName: \"kubernetes.io/projected/5df204be-f201-46a5-a5da-03a21ac1b7b0-kube-api-access-kmxnk\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.519056 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sxhld" Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.616242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5czb-config-7fpg7" event={"ID":"5df204be-f201-46a5-a5da-03a21ac1b7b0","Type":"ContainerDied","Data":"e6ee1193bf7e63b62e538dd80aaf28f0c700720fc7827167f73c0aee928b22f7"} Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.616279 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ee1193bf7e63b62e538dd80aaf28f0c700720fc7827167f73c0aee928b22f7" Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.616347 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5czb-config-7fpg7" Mar 14 09:17:58 crc kubenswrapper[4687]: I0314 09:17:58.959624 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zq8ml" Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.104794 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwb89\" (UniqueName: \"kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89\") pod \"9b6cf348-a162-431d-9399-f350f28e5b2d\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.105167 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts\") pod \"9b6cf348-a162-431d-9399-f350f28e5b2d\" (UID: \"9b6cf348-a162-431d-9399-f350f28e5b2d\") " Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.105594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b6cf348-a162-431d-9399-f350f28e5b2d" (UID: "9b6cf348-a162-431d-9399-f350f28e5b2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.165028 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89" (OuterVolumeSpecName: "kube-api-access-nwb89") pod "9b6cf348-a162-431d-9399-f350f28e5b2d" (UID: "9b6cf348-a162-431d-9399-f350f28e5b2d"). InnerVolumeSpecName "kube-api-access-nwb89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.207027 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6cf348-a162-431d-9399-f350f28e5b2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:59 crc kubenswrapper[4687]: I0314 09:17:59.207065 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwb89\" (UniqueName: \"kubernetes.io/projected/9b6cf348-a162-431d-9399-f350f28e5b2d-kube-api-access-nwb89\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.400396 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s5czb-config-7fpg7"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.407907 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s5czb-config-7fpg7"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.441535 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k84m2"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.470852 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sz9dv"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.630362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sz9dv" event={"ID":"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2","Type":"ContainerStarted","Data":"4381843b8b1c6fc0186bd8ce155c1d32ef71f9af6f64bf43a802ae3b1ad011b9"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.635251 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k84m2" event={"ID":"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb","Type":"ContainerStarted","Data":"53d772c14c3d090ae825078160b8ccc15390d58f95d85e2b21bc59e5b2a3598f"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.642689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerStarted","Data":"afe989acce7d4913672e8658f3ad6edda80d1d6869c91b50df6cfca908e186f0"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.648576 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zq8ml" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.648671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq8ml" event={"ID":"9b6cf348-a162-431d-9399-f350f28e5b2d","Type":"ContainerDied","Data":"b4391fa588f93cd92c1cd6daa837784254f3fc181b3b67ac352a96c8cd9e1af1"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.648720 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4391fa588f93cd92c1cd6daa837784254f3fc181b3b67ac352a96c8cd9e1af1" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.661925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"bd79a1857bac2adb72bd75d0225bb69854f6ad97001d5bcf08d13fbc9e6330b6"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:17:59.752651 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df204be-f201-46a5-a5da-03a21ac1b7b0" path="/var/lib/kubelet/pods/5df204be-f201-46a5-a5da-03a21ac1b7b0/volumes" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.129627 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557998-pjvpw"] Mar 14 09:18:00 crc kubenswrapper[4687]: E0314 09:18:00.129953 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6cf348-a162-431d-9399-f350f28e5b2d" containerName="mariadb-account-create-update" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.129969 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9b6cf348-a162-431d-9399-f350f28e5b2d" containerName="mariadb-account-create-update" Mar 14 09:18:00 crc kubenswrapper[4687]: E0314 09:18:00.129982 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df204be-f201-46a5-a5da-03a21ac1b7b0" containerName="ovn-config" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.129989 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df204be-f201-46a5-a5da-03a21ac1b7b0" containerName="ovn-config" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.130133 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df204be-f201-46a5-a5da-03a21ac1b7b0" containerName="ovn-config" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.130150 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6cf348-a162-431d-9399-f350f28e5b2d" containerName="mariadb-account-create-update" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.130680 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.135621 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.135926 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.137924 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.153491 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-pjvpw"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.239376 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xs22\" (UniqueName: \"kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22\") pod \"auto-csr-approver-29557998-pjvpw\" (UID: \"5e7baed2-71ff-425e-92e0-da1afa67a430\") " pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.343277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xs22\" (UniqueName: \"kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22\") pod \"auto-csr-approver-29557998-pjvpw\" (UID: \"5e7baed2-71ff-425e-92e0-da1afa67a430\") " pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.344663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ed35-account-create-update-hlqxb"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.352020 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b81a-account-create-update-ktznw"] Mar 14 
09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.363521 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xs22\" (UniqueName: \"kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22\") pod \"auto-csr-approver-29557998-pjvpw\" (UID: \"5e7baed2-71ff-425e-92e0-da1afa67a430\") " pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.367625 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sxhld"] Mar 14 09:18:00 crc kubenswrapper[4687]: W0314 09:18:00.375191 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3748411_81a9_4a0d_b7f0_a32f77b42c48.slice/crio-835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c WatchSource:0}: Error finding container 835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c: Status 404 returned error can't find the container with id 835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.501513 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.679058 4687 generic.go:334] "Generic (PLEG): container finished" podID="5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" containerID="c5160babefb372410068b2ebbed521d748cc6f3cdc5d769e07cfccde8d39c547" exitCode=0 Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.679207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k84m2" event={"ID":"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb","Type":"ContainerDied","Data":"c5160babefb372410068b2ebbed521d748cc6f3cdc5d769e07cfccde8d39c547"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.685596 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b81a-account-create-update-ktznw" event={"ID":"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3","Type":"ContainerStarted","Data":"725c97c9e4f381679c3c182441e0be4eebd851dd53d9cb2df9dfdf8e0d835615"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.685645 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b81a-account-create-update-ktznw" event={"ID":"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3","Type":"ContainerStarted","Data":"69a4aa670c7f8ff97146e169650f8ae1de4672098caac274f588577e8c9cc21a"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.689164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxhld" event={"ID":"a3748411-81a9-4a0d-b7f0-a32f77b42c48","Type":"ContainerStarted","Data":"835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.692198 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed35-account-create-update-hlqxb" event={"ID":"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222","Type":"ContainerStarted","Data":"4e54555ffca52215c87867db8576deeeec0a5ecf10032fc54b1517593224066b"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.701352 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"dd3f3d274e27fb813def3d874c22563aa96072fd62702aecf85d88bee8b16981"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.701687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"0c071f06be75216e238efe593e4758c443ac564a9131e79924e68a537a75240f"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.701705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"75a6f02c290294deff77b4ac5a0e1a8cca1e9f1f8b685dc3c02485b1ad5d9230"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.714155 4687 generic.go:334] "Generic (PLEG): container finished" podID="41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" containerID="89abd0d7a0606cc724b4dbeeb5ab2c49e74710ccdc1053ce8a8612e8a9918d64" exitCode=0 Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.714971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sz9dv" event={"ID":"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2","Type":"ContainerDied","Data":"89abd0d7a0606cc724b4dbeeb5ab2c49e74710ccdc1053ce8a8612e8a9918d64"} Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.735829 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b81a-account-create-update-ktznw" podStartSLOduration=3.735812861 podStartE2EDuration="3.735812861s" podCreationTimestamp="2026-03-14 09:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:00.733854183 +0000 UTC m=+1265.722094558" watchObservedRunningTime="2026-03-14 09:18:00.735812861 +0000 UTC m=+1265.724053236" Mar 14 09:18:00 
crc kubenswrapper[4687]: I0314 09:18:00.789179 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-srrwn"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.790236 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.793277 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.793683 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-pr4mk" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.807191 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-srrwn"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.850027 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rm6zs"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.851239 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.867636 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rm6zs"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.876814 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f02-account-create-update-w9xnm"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.878176 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.880577 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.884607 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f02-account-create-update-w9xnm"] Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.955992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsqj\" (UniqueName: \"kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654dk\" (UniqueName: \"kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk\") pod 
\"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2nl\" (UniqueName: \"kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956288 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts\") pod \"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:00 crc kubenswrapper[4687]: I0314 09:18:00.956327 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654dk\" (UniqueName: \"kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk\") pod 
\"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058230 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2nl\" (UniqueName: \"kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts\") pod \"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058316 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058370 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " 
pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfsqj\" (UniqueName: \"kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.058438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.059304 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts\") pod \"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.059354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.062170 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " 
pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.062416 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.072546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.081833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654dk\" (UniqueName: \"kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk\") pod \"neutron-db-create-rm6zs\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.082194 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfsqj\" (UniqueName: \"kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj\") pod \"neutron-5f02-account-create-update-w9xnm\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.082785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2nl\" (UniqueName: \"kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl\") pod \"watcher-db-sync-srrwn\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: 
I0314 09:18:01.110133 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.147012 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s5czb" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.174158 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.199267 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.399282 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-pjvpw"] Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.729462 4687 generic.go:334] "Generic (PLEG): container finished" podID="33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" containerID="725c97c9e4f381679c3c182441e0be4eebd851dd53d9cb2df9dfdf8e0d835615" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.729841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b81a-account-create-update-ktznw" event={"ID":"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3","Type":"ContainerDied","Data":"725c97c9e4f381679c3c182441e0be4eebd851dd53d9cb2df9dfdf8e0d835615"} Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.732616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed35-account-create-update-hlqxb" event={"ID":"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222","Type":"ContainerStarted","Data":"6cf2456c817df74ab9a1300cf2c8da5bfa1e0483684ddb06c9d9729455e09a9c"} Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.754974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"072a3c4fe559257cfc8e16c1cf482f750e35d374bc511a4fade50dfdc8fa7109"} Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.755026 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" event={"ID":"5e7baed2-71ff-425e-92e0-da1afa67a430","Type":"ContainerStarted","Data":"b1d99457369ffa6ca9739c7368ca81776016af999ae74c618ff725c7c580ca4f"} Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.788469 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-srrwn"] Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.793348 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ed35-account-create-update-hlqxb" podStartSLOduration=3.793310686 podStartE2EDuration="3.793310686s" podCreationTimestamp="2026-03-14 09:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:01.77887675 +0000 UTC m=+1266.767117125" watchObservedRunningTime="2026-03-14 09:18:01.793310686 +0000 UTC m=+1266.781551061" Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.915740 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rm6zs"] Mar 14 09:18:01 crc kubenswrapper[4687]: W0314 09:18:01.917109 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89dd59a1_f483_40d9_8153_d66e9bb13477.slice/crio-7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57 WatchSource:0}: Error finding container 7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57: Status 404 returned error can't find the container with id 7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57 Mar 14 09:18:01 crc kubenswrapper[4687]: W0314 09:18:01.927304 4687 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5b0923a_08cb_49cb_b41e_7f1803315089.slice/crio-3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0 WatchSource:0}: Error finding container 3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0: Status 404 returned error can't find the container with id 3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0 Mar 14 09:18:01 crc kubenswrapper[4687]: I0314 09:18:01.937119 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f02-account-create-update-w9xnm"] Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.342867 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sz9dv" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.400385 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj\") pod \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.400739 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts\") pod \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\" (UID: \"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2\") " Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.403024 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" (UID: "41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.408811 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj" (OuterVolumeSpecName: "kube-api-access-nnmnj") pod "41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" (UID: "41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2"). InnerVolumeSpecName "kube-api-access-nnmnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.410210 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k84m2" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.502828 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4x7v\" (UniqueName: \"kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v\") pod \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.502869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts\") pod \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\" (UID: \"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb\") " Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.503173 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnmnj\" (UniqueName: \"kubernetes.io/projected/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-kube-api-access-nnmnj\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.503189 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 
crc kubenswrapper[4687]: I0314 09:18:02.503703 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" (UID: "5c1cb2ae-49b5-424e-ba93-f222dab8b4cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.508918 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v" (OuterVolumeSpecName: "kube-api-access-t4x7v") pod "5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" (UID: "5c1cb2ae-49b5-424e-ba93-f222dab8b4cb"). InnerVolumeSpecName "kube-api-access-t4x7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.605205 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4x7v\" (UniqueName: \"kubernetes.io/projected/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-kube-api-access-t4x7v\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.605253 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.756130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k84m2" event={"ID":"5c1cb2ae-49b5-424e-ba93-f222dab8b4cb","Type":"ContainerDied","Data":"53d772c14c3d090ae825078160b8ccc15390d58f95d85e2b21bc59e5b2a3598f"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.756169 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d772c14c3d090ae825078160b8ccc15390d58f95d85e2b21bc59e5b2a3598f" Mar 14 09:18:02 crc 
kubenswrapper[4687]: I0314 09:18:02.756224 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k84m2" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.761046 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rm6zs" event={"ID":"89dd59a1-f483-40d9-8153-d66e9bb13477","Type":"ContainerStarted","Data":"f3877d4ac39e61968e147f16f3eca83a6892b4f7e656d7dce5b8a3abb2813663"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.761089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rm6zs" event={"ID":"89dd59a1-f483-40d9-8153-d66e9bb13477","Type":"ContainerStarted","Data":"7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.763623 4687 generic.go:334] "Generic (PLEG): container finished" podID="abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" containerID="6cf2456c817df74ab9a1300cf2c8da5bfa1e0483684ddb06c9d9729455e09a9c" exitCode=0 Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.763688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed35-account-create-update-hlqxb" event={"ID":"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222","Type":"ContainerDied","Data":"6cf2456c817df74ab9a1300cf2c8da5bfa1e0483684ddb06c9d9729455e09a9c"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.768177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"58d8f2a3dff6c32af1a70fe5bd30f501477e8bbe2129b3687be2571fa4a802f2"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.768234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"abe134ed099f299ab289c09f6b5835ed277832076bb1088a6493eec3d8b49b12"} Mar 14 09:18:02 crc 
kubenswrapper[4687]: I0314 09:18:02.768253 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"907d3fca9f901755be40ad3e50715336d16e5a5038383c0c1b16a1e1533dc8bf"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.771322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-srrwn" event={"ID":"d2204f87-28ac-4294-b695-a189cbf15782","Type":"ContainerStarted","Data":"edad535f98a2310b61c777ea593a6af434dfae2bc52a1456f6be3517d9e78dae"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.772957 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sz9dv" event={"ID":"41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2","Type":"ContainerDied","Data":"4381843b8b1c6fc0186bd8ce155c1d32ef71f9af6f64bf43a802ae3b1ad011b9"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.772979 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4381843b8b1c6fc0186bd8ce155c1d32ef71f9af6f64bf43a802ae3b1ad011b9" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.773020 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sz9dv" Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.789443 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5b0923a-08cb-49cb-b41e-7f1803315089" containerID="f067b316f6639f4dd3062d198ee320f6355194821d60a7513c4688b3e7a2e447" exitCode=0 Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.789714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f02-account-create-update-w9xnm" event={"ID":"a5b0923a-08cb-49cb-b41e-7f1803315089","Type":"ContainerDied","Data":"f067b316f6639f4dd3062d198ee320f6355194821d60a7513c4688b3e7a2e447"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.789836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f02-account-create-update-w9xnm" event={"ID":"a5b0923a-08cb-49cb-b41e-7f1803315089","Type":"ContainerStarted","Data":"3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0"} Mar 14 09:18:02 crc kubenswrapper[4687]: I0314 09:18:02.792162 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-rm6zs" podStartSLOduration=2.7921413040000003 podStartE2EDuration="2.792141304s" podCreationTimestamp="2026-03-14 09:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:02.783006269 +0000 UTC m=+1267.771246644" watchObservedRunningTime="2026-03-14 09:18:02.792141304 +0000 UTC m=+1267.780381679" Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.075717 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b81a-account-create-update-ktznw" Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.220323 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr67p\" (UniqueName: \"kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p\") pod \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.220396 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts\") pod \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\" (UID: \"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3\") " Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.221461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" (UID: "33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.322226 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.802642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b81a-account-create-update-ktznw" event={"ID":"33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3","Type":"ContainerDied","Data":"69a4aa670c7f8ff97146e169650f8ae1de4672098caac274f588577e8c9cc21a"} Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.802683 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a4aa670c7f8ff97146e169650f8ae1de4672098caac274f588577e8c9cc21a" Mar 14 09:18:03 crc kubenswrapper[4687]: I0314 09:18:03.802741 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b81a-account-create-update-ktznw" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.184959 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.202270 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ed35-account-create-update-hlqxb" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.352944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts\") pod \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.353119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64x9q\" (UniqueName: \"kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q\") pod \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\" (UID: \"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222\") " Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.353240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfsqj\" (UniqueName: \"kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj\") pod \"a5b0923a-08cb-49cb-b41e-7f1803315089\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.353306 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts\") pod \"a5b0923a-08cb-49cb-b41e-7f1803315089\" (UID: \"a5b0923a-08cb-49cb-b41e-7f1803315089\") " Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.366594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj" (OuterVolumeSpecName: "kube-api-access-tfsqj") pod "a5b0923a-08cb-49cb-b41e-7f1803315089" (UID: "a5b0923a-08cb-49cb-b41e-7f1803315089"). InnerVolumeSpecName "kube-api-access-tfsqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.366638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q" (OuterVolumeSpecName: "kube-api-access-64x9q") pod "abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" (UID: "abe27ce9-7c2d-4c53-a5ac-9130c2f6d222"). InnerVolumeSpecName "kube-api-access-64x9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.455810 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfsqj\" (UniqueName: \"kubernetes.io/projected/a5b0923a-08cb-49cb-b41e-7f1803315089-kube-api-access-tfsqj\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.455838 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64x9q\" (UniqueName: \"kubernetes.io/projected/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-kube-api-access-64x9q\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.812709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f02-account-create-update-w9xnm" event={"ID":"a5b0923a-08cb-49cb-b41e-7f1803315089","Type":"ContainerDied","Data":"3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0"} Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.812754 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f759f83b011d7f8fe084ce884cb05e66651d95715ab7b023d8223e890858ba0" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.812815 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f02-account-create-update-w9xnm" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.815448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed35-account-create-update-hlqxb" event={"ID":"abe27ce9-7c2d-4c53-a5ac-9130c2f6d222","Type":"ContainerDied","Data":"4e54555ffca52215c87867db8576deeeec0a5ecf10032fc54b1517593224066b"} Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.815480 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e54555ffca52215c87867db8576deeeec0a5ecf10032fc54b1517593224066b" Mar 14 09:18:04 crc kubenswrapper[4687]: I0314 09:18:04.815533 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed35-account-create-update-hlqxb" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.253150 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" (UID: "abe27ce9-7c2d-4c53-a5ac-9130c2f6d222"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.253988 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.287896 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p" (OuterVolumeSpecName: "kube-api-access-zr67p") pod "33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" (UID: "33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3"). InnerVolumeSpecName "kube-api-access-zr67p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.288380 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5b0923a-08cb-49cb-b41e-7f1803315089" (UID: "a5b0923a-08cb-49cb-b41e-7f1803315089"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.356092 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr67p\" (UniqueName: \"kubernetes.io/projected/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3-kube-api-access-zr67p\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.356136 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5b0923a-08cb-49cb-b41e-7f1803315089-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.870935 4687 generic.go:334] "Generic (PLEG): container finished" podID="5e7baed2-71ff-425e-92e0-da1afa67a430" containerID="3f55d235a92f9d52cf4adcf065a8a0456bd8d230d521d9b6c04f46785639ee87" exitCode=0 Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.870982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" event={"ID":"5e7baed2-71ff-425e-92e0-da1afa67a430","Type":"ContainerDied","Data":"3f55d235a92f9d52cf4adcf065a8a0456bd8d230d521d9b6c04f46785639ee87"} Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.872630 4687 generic.go:334] "Generic (PLEG): container finished" podID="89dd59a1-f483-40d9-8153-d66e9bb13477" containerID="f3877d4ac39e61968e147f16f3eca83a6892b4f7e656d7dce5b8a3abb2813663" exitCode=0 Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.872674 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-create-rm6zs" event={"ID":"89dd59a1-f483-40d9-8153-d66e9bb13477","Type":"ContainerDied","Data":"f3877d4ac39e61968e147f16f3eca83a6892b4f7e656d7dce5b8a3abb2813663"} Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.875537 4687 generic.go:334] "Generic (PLEG): container finished" podID="40c58540-7bfb-429a-bce0-2231dffb158e" containerID="afe989acce7d4913672e8658f3ad6edda80d1d6869c91b50df6cfca908e186f0" exitCode=0 Mar 14 09:18:10 crc kubenswrapper[4687]: I0314 09:18:10.875574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerDied","Data":"afe989acce7d4913672e8658f3ad6edda80d1d6869c91b50df6cfca908e186f0"} Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.756881 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.762668 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.893705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" event={"ID":"5e7baed2-71ff-425e-92e0-da1afa67a430","Type":"ContainerDied","Data":"b1d99457369ffa6ca9739c7368ca81776016af999ae74c618ff725c7c580ca4f"} Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.893762 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d99457369ffa6ca9739c7368ca81776016af999ae74c618ff725c7c580ca4f" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.893714 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-pjvpw" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.896292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rm6zs" event={"ID":"89dd59a1-f483-40d9-8153-d66e9bb13477","Type":"ContainerDied","Data":"7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57"} Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.896347 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rm6zs" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.896359 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7272b333ebdd275fe53918cab8acc3cb366b9be11fcaa0ab5e0328d5de516a57" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.909614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xs22\" (UniqueName: \"kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22\") pod \"5e7baed2-71ff-425e-92e0-da1afa67a430\" (UID: \"5e7baed2-71ff-425e-92e0-da1afa67a430\") " Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.909819 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-654dk\" (UniqueName: \"kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk\") pod \"89dd59a1-f483-40d9-8153-d66e9bb13477\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.909856 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts\") pod \"89dd59a1-f483-40d9-8153-d66e9bb13477\" (UID: \"89dd59a1-f483-40d9-8153-d66e9bb13477\") " Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.910549 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89dd59a1-f483-40d9-8153-d66e9bb13477" (UID: "89dd59a1-f483-40d9-8153-d66e9bb13477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.914801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk" (OuterVolumeSpecName: "kube-api-access-654dk") pod "89dd59a1-f483-40d9-8153-d66e9bb13477" (UID: "89dd59a1-f483-40d9-8153-d66e9bb13477"). InnerVolumeSpecName "kube-api-access-654dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:12 crc kubenswrapper[4687]: I0314 09:18:12.922755 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22" (OuterVolumeSpecName: "kube-api-access-5xs22") pod "5e7baed2-71ff-425e-92e0-da1afa67a430" (UID: "5e7baed2-71ff-425e-92e0-da1afa67a430"). InnerVolumeSpecName "kube-api-access-5xs22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:13 crc kubenswrapper[4687]: I0314 09:18:13.011876 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-654dk\" (UniqueName: \"kubernetes.io/projected/89dd59a1-f483-40d9-8153-d66e9bb13477-kube-api-access-654dk\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:13 crc kubenswrapper[4687]: I0314 09:18:13.011915 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dd59a1-f483-40d9-8153-d66e9bb13477-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:13 crc kubenswrapper[4687]: I0314 09:18:13.011925 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xs22\" (UniqueName: \"kubernetes.io/projected/5e7baed2-71ff-425e-92e0-da1afa67a430-kube-api-access-5xs22\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:13 crc kubenswrapper[4687]: I0314 09:18:13.840283 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-wbxz7"] Mar 14 09:18:13 crc kubenswrapper[4687]: I0314 09:18:13.849757 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-wbxz7"] Mar 14 09:18:15 crc kubenswrapper[4687]: I0314 09:18:15.761122 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a" path="/var/lib/kubelet/pods/2f15f9e9-d95b-4b09-9ac6-a3aa446aca5a/volumes" Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.941496 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-srrwn" event={"ID":"d2204f87-28ac-4294-b695-a189cbf15782","Type":"ContainerStarted","Data":"35964c67454de0be181c6c4d08fa78944c6f263c2a376a577b5597ef99feec95"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.945144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wztwf" 
event={"ID":"38ba5c4b-9a2a-43ce-a6c0-f3488284929c","Type":"ContainerStarted","Data":"ff49d2ca03080a4c372d210fefc20f47528c4ef51725f4c703ecd76d60f72ad5"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.947350 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerStarted","Data":"1f737c627ffeb969cc76042890960199a65bda8afe986ce7ec64d22e7692a0d1"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.950515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxhld" event={"ID":"a3748411-81a9-4a0d-b7f0-a32f77b42c48","Type":"ContainerStarted","Data":"32cf1cac59e97922186d470fecf0224b0039b5596ebfc945daa18feecf39af21"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.957554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"ae5b233828ec19afe86977a1b29f108961c7354b4466c06403c1ccc0eac833ae"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.957851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"cf84bc15b01ffc70bca73f5f5a3431681f7065d9b7312e297646b1482fa5cc47"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.957867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"a4f56e0524e0193d5e966261abddafc18ef9708e02d8fa791b4a7e582e6c4e5b"} Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.957696 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-srrwn" podStartSLOduration=2.5836229250000002 podStartE2EDuration="17.957674116s" podCreationTimestamp="2026-03-14 09:18:00 +0000 UTC" 
firstStartedPulling="2026-03-14 09:18:01.803890457 +0000 UTC m=+1266.792130832" lastFinishedPulling="2026-03-14 09:18:17.177941648 +0000 UTC m=+1282.166182023" observedRunningTime="2026-03-14 09:18:17.954398366 +0000 UTC m=+1282.942638741" watchObservedRunningTime="2026-03-14 09:18:17.957674116 +0000 UTC m=+1282.945914481" Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.986959 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wztwf" podStartSLOduration=2.9549271790000002 podStartE2EDuration="41.986936938s" podCreationTimestamp="2026-03-14 09:17:36 +0000 UTC" firstStartedPulling="2026-03-14 09:17:38.147530548 +0000 UTC m=+1243.135770923" lastFinishedPulling="2026-03-14 09:18:17.179540297 +0000 UTC m=+1282.167780682" observedRunningTime="2026-03-14 09:18:17.975840204 +0000 UTC m=+1282.964080579" watchObservedRunningTime="2026-03-14 09:18:17.986936938 +0000 UTC m=+1282.975177313" Mar 14 09:18:17 crc kubenswrapper[4687]: I0314 09:18:17.999904 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sxhld" podStartSLOduration=3.201195212 podStartE2EDuration="19.999881396s" podCreationTimestamp="2026-03-14 09:17:58 +0000 UTC" firstStartedPulling="2026-03-14 09:18:00.379574161 +0000 UTC m=+1265.367814536" lastFinishedPulling="2026-03-14 09:18:17.178260335 +0000 UTC m=+1282.166500720" observedRunningTime="2026-03-14 09:18:17.999253241 +0000 UTC m=+1282.987493616" watchObservedRunningTime="2026-03-14 09:18:17.999881396 +0000 UTC m=+1282.988121781" Mar 14 09:18:18 crc kubenswrapper[4687]: I0314 09:18:18.981070 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"e7be410865763164b05304d6bcac41be961516a42bbf261ad8101dbbc09579cf"} Mar 14 09:18:18 crc kubenswrapper[4687]: I0314 09:18:18.981753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"1338bf6e2ef6036b35bb45054ea69a02e8c0b580bc5260f87fab64be1414489f"} Mar 14 09:18:18 crc kubenswrapper[4687]: I0314 09:18:18.981772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"6441e01ee96095b44487435e99d9c3635f481158bfa633db66749318f45e6810"} Mar 14 09:18:19 crc kubenswrapper[4687]: I0314 09:18:19.999383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa2b161d-a32b-4bb8-b947-455a1f17aa59","Type":"ContainerStarted","Data":"9473507bee23fac726d010465eb801a29059c0ddc5c3a560b4ea4977b6f4398b"} Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.050665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.21033295 podStartE2EDuration="57.050650153s" podCreationTimestamp="2026-03-14 09:17:23 +0000 UTC" firstStartedPulling="2026-03-14 09:17:57.337682856 +0000 UTC m=+1262.325923231" lastFinishedPulling="2026-03-14 09:18:17.178000059 +0000 UTC m=+1282.166240434" observedRunningTime="2026-03-14 09:18:20.042166364 +0000 UTC m=+1285.030406749" watchObservedRunningTime="2026-03-14 09:18:20.050650153 +0000 UTC m=+1285.038890528" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.321914 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"] Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322566 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7baed2-71ff-425e-92e0-da1afa67a430" containerName="oc" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322590 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7baed2-71ff-425e-92e0-da1afa67a430" containerName="oc" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322615 
4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322623 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322645 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0923a-08cb-49cb-b41e-7f1803315089" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322653 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0923a-08cb-49cb-b41e-7f1803315089" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322670 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322678 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322692 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322699 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322713 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322831 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" 
containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: E0314 09:18:20.322851 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dd59a1-f483-40d9-8153-d66e9bb13477" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.322859 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dd59a1-f483-40d9-8153-d66e9bb13477" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323049 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323079 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7baed2-71ff-425e-92e0-da1afa67a430" containerName="oc" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323096 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b0923a-08cb-49cb-b41e-7f1803315089" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323116 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323129 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" containerName="mariadb-account-create-update" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323151 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="89dd59a1-f483-40d9-8153-d66e9bb13477" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.323169 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" containerName="mariadb-database-create" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.324244 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.336021 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.343641 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"] Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqkx\" (UniqueName: \"kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx\") pod 
\"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442722 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.442779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544431 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544517 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544589 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" 
(UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqkx\" (UniqueName: \"kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.544707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.545716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.546032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 
14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.546068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.546734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.546732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.578254 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqkx\" (UniqueName: \"kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx\") pod \"dnsmasq-dns-7bcd867bff-75svm\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:20 crc kubenswrapper[4687]: I0314 09:18:20.701016 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:21 crc kubenswrapper[4687]: I0314 09:18:21.011523 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerStarted","Data":"2fac31ec5a4fc0fa313cb92c0f0b2047f0a59e4fe4eb88d807cbb826d5c09748"} Mar 14 09:18:21 crc kubenswrapper[4687]: I0314 09:18:21.011860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40c58540-7bfb-429a-bce0-2231dffb158e","Type":"ContainerStarted","Data":"4d6f9cbe5c0f60536e5452a938424335066aa1f537d76ac5f0f0900788fb6375"} Mar 14 09:18:21 crc kubenswrapper[4687]: I0314 09:18:21.042756 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.042739915 podStartE2EDuration="26.042739915s" podCreationTimestamp="2026-03-14 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:21.03686239 +0000 UTC m=+1286.025102765" watchObservedRunningTime="2026-03-14 09:18:21.042739915 +0000 UTC m=+1286.030980290" Mar 14 09:18:21 crc kubenswrapper[4687]: I0314 09:18:21.146938 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"] Mar 14 09:18:21 crc kubenswrapper[4687]: I0314 09:18:21.293697 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:22 crc kubenswrapper[4687]: I0314 09:18:22.022421 4687 generic.go:334] "Generic (PLEG): container finished" podID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerID="5ceb69180e1f77dd61933e4f6a3ea819490bc13b0231648223c4bf4c9dce0ddc" exitCode=0 Mar 14 09:18:22 crc kubenswrapper[4687]: I0314 09:18:22.022495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bcd867bff-75svm" event={"ID":"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c","Type":"ContainerDied","Data":"5ceb69180e1f77dd61933e4f6a3ea819490bc13b0231648223c4bf4c9dce0ddc"} Mar 14 09:18:22 crc kubenswrapper[4687]: I0314 09:18:22.024177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" event={"ID":"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c","Type":"ContainerStarted","Data":"c3af45e3b71d341c2c9624984fe398389963c3a020b54c8aa797223e5ccf759d"} Mar 14 09:18:23 crc kubenswrapper[4687]: I0314 09:18:23.035875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" event={"ID":"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c","Type":"ContainerStarted","Data":"cc33ff4dcd16cb45edb8820e860b22b690af9c5d97e9d38afc007c40b02b3ca1"} Mar 14 09:18:23 crc kubenswrapper[4687]: I0314 09:18:23.036251 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:23 crc kubenswrapper[4687]: I0314 09:18:23.058097 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" podStartSLOduration=3.058074848 podStartE2EDuration="3.058074848s" podCreationTimestamp="2026-03-14 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:23.052698066 +0000 UTC m=+1288.040938441" watchObservedRunningTime="2026-03-14 09:18:23.058074848 +0000 UTC m=+1288.046315233" Mar 14 09:18:24 crc kubenswrapper[4687]: I0314 09:18:24.046011 4687 generic.go:334] "Generic (PLEG): container finished" podID="d2204f87-28ac-4294-b695-a189cbf15782" containerID="35964c67454de0be181c6c4d08fa78944c6f263c2a376a577b5597ef99feec95" exitCode=0 Mar 14 09:18:24 crc kubenswrapper[4687]: I0314 09:18:24.046119 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-srrwn" 
event={"ID":"d2204f87-28ac-4294-b695-a189cbf15782","Type":"ContainerDied","Data":"35964c67454de0be181c6c4d08fa78944c6f263c2a376a577b5597ef99feec95"} Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.404396 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.435670 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data\") pod \"d2204f87-28ac-4294-b695-a189cbf15782\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.435733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv2nl\" (UniqueName: \"kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl\") pod \"d2204f87-28ac-4294-b695-a189cbf15782\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.436613 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data\") pod \"d2204f87-28ac-4294-b695-a189cbf15782\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.436657 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle\") pod \"d2204f87-28ac-4294-b695-a189cbf15782\" (UID: \"d2204f87-28ac-4294-b695-a189cbf15782\") " Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.442442 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl" 
(OuterVolumeSpecName: "kube-api-access-zv2nl") pod "d2204f87-28ac-4294-b695-a189cbf15782" (UID: "d2204f87-28ac-4294-b695-a189cbf15782"). InnerVolumeSpecName "kube-api-access-zv2nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.448573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d2204f87-28ac-4294-b695-a189cbf15782" (UID: "d2204f87-28ac-4294-b695-a189cbf15782"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.462983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2204f87-28ac-4294-b695-a189cbf15782" (UID: "d2204f87-28ac-4294-b695-a189cbf15782"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.481207 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data" (OuterVolumeSpecName: "config-data") pod "d2204f87-28ac-4294-b695-a189cbf15782" (UID: "d2204f87-28ac-4294-b695-a189cbf15782"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.551050 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.551097 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.551112 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2204f87-28ac-4294-b695-a189cbf15782-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:25 crc kubenswrapper[4687]: I0314 09:18:25.551128 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv2nl\" (UniqueName: \"kubernetes.io/projected/d2204f87-28ac-4294-b695-a189cbf15782-kube-api-access-zv2nl\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.078062 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-srrwn" event={"ID":"d2204f87-28ac-4294-b695-a189cbf15782","Type":"ContainerDied","Data":"edad535f98a2310b61c777ea593a6af434dfae2bc52a1456f6be3517d9e78dae"} Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.078363 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edad535f98a2310b61c777ea593a6af434dfae2bc52a1456f6be3517d9e78dae" Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.078119 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-srrwn" Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.080446 4687 generic.go:334] "Generic (PLEG): container finished" podID="a3748411-81a9-4a0d-b7f0-a32f77b42c48" containerID="32cf1cac59e97922186d470fecf0224b0039b5596ebfc945daa18feecf39af21" exitCode=0 Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.080487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxhld" event={"ID":"a3748411-81a9-4a0d-b7f0-a32f77b42c48","Type":"ContainerDied","Data":"32cf1cac59e97922186d470fecf0224b0039b5596ebfc945daa18feecf39af21"} Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.294048 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:26 crc kubenswrapper[4687]: I0314 09:18:26.302726 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.092275 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.479685 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sxhld" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.597826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data\") pod \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.597900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zf27\" (UniqueName: \"kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27\") pod \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.597969 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle\") pod \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\" (UID: \"a3748411-81a9-4a0d-b7f0-a32f77b42c48\") " Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.604563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27" (OuterVolumeSpecName: "kube-api-access-5zf27") pod "a3748411-81a9-4a0d-b7f0-a32f77b42c48" (UID: "a3748411-81a9-4a0d-b7f0-a32f77b42c48"). InnerVolumeSpecName "kube-api-access-5zf27". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.630253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3748411-81a9-4a0d-b7f0-a32f77b42c48" (UID: "a3748411-81a9-4a0d-b7f0-a32f77b42c48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.675676 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data" (OuterVolumeSpecName: "config-data") pod "a3748411-81a9-4a0d-b7f0-a32f77b42c48" (UID: "a3748411-81a9-4a0d-b7f0-a32f77b42c48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.700200 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.700256 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zf27\" (UniqueName: \"kubernetes.io/projected/a3748411-81a9-4a0d-b7f0-a32f77b42c48-kube-api-access-5zf27\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:27 crc kubenswrapper[4687]: I0314 09:18:27.700277 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3748411-81a9-4a0d-b7f0-a32f77b42c48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.098841 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sxhld" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.098853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxhld" event={"ID":"a3748411-81a9-4a0d-b7f0-a32f77b42c48","Type":"ContainerDied","Data":"835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c"} Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.100236 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835acfdb84defb24805e6b0d199fdaa488e59e833e9987084ee05386ec7f055c" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.350479 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.350720 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="dnsmasq-dns" containerID="cri-o://cc33ff4dcd16cb45edb8820e860b22b690af9c5d97e9d38afc007c40b02b3ca1" gracePeriod=10 Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.351488 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.424050 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"] Mar 14 09:18:28 crc kubenswrapper[4687]: E0314 09:18:28.424537 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3748411-81a9-4a0d-b7f0-a32f77b42c48" containerName="keystone-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.424562 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3748411-81a9-4a0d-b7f0-a32f77b42c48" containerName="keystone-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: E0314 09:18:28.424588 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2204f87-28ac-4294-b695-a189cbf15782" containerName="watcher-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.424596 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2204f87-28ac-4294-b695-a189cbf15782" containerName="watcher-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.424813 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3748411-81a9-4a0d-b7f0-a32f77b42c48" containerName="keystone-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.424831 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2204f87-28ac-4294-b695-a189cbf15782" containerName="watcher-db-sync" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.425909 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.439053 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.453440 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pfltp"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.454843 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.461048 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.461808 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.462040 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4vhwn" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.462252 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pfltp"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.462259 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.462455 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521625 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521666 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521737 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnsl\" (UniqueName: \"kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521780 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnf55\" (UniqueName: \"kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.521844 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.523172 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-applier-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.524215 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.527878 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-pr4mk" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.528065 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.601155 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.627897 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8sp7\" (UniqueName: \"kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.627959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628019 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxnsl\" (UniqueName: \"kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628137 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628156 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnf55\" (UniqueName: \"kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628211 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628302 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0\") pod 
\"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628513 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.628565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.631464 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.631723 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.640751 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " 
pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.644028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.646662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.646848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.650677 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.663685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.690940 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.691225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnf55\" (UniqueName: \"kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55\") pod \"dnsmasq-dns-f796d878c-46z54\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") " pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.691777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.715544 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.716715 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.725743 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxnsl\" (UniqueName: \"kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl\") pod \"keystone-bootstrap-pfltp\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731177 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731378 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731430 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8sp7\" (UniqueName: 
\"kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ws2\" (UniqueName: \"kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.731635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs\") pod 
\"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.737125 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.737400 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.738131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.742342 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.764966 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f796d878c-46z54" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.771884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8sp7\" (UniqueName: \"kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7\") pod \"watcher-applier-0\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.801543 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.803701 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.814893 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.826040 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.841455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.841873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.842007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9ws2\" (UniqueName: \"kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.842121 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.842317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc 
kubenswrapper[4687]: I0314 09:18:28.842884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.846714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.857637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.866400 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.867844 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-k6kmw"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.868993 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.881680 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.882112 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.882259 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-72582" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.889125 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ws2\" (UniqueName: \"kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2\") pod \"watcher-decision-engine-0\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.896384 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.931173 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.938758 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-694bb57565-vwmlk"] Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.940374 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.946243 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.946323 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.946385 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.946431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwch2\" (UniqueName: \"kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.946498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:28 crc kubenswrapper[4687]: 
I0314 09:18:28.949317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.949470 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xzzd5" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.949505 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.949629 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 14 09:18:28 crc kubenswrapper[4687]: I0314 09:18:28.965269 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k6kmw"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.005966 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694bb57565-vwmlk"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.017959 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-q2lvm"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.019230 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.024359 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rxpxc" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.025074 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.025202 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.029786 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q2lvm"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048646 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048711 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048747 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.048803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.052965 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053011 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key\") pod 
\"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053071 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwch2\" (UniqueName: \"kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053136 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8fz\" (UniqueName: \"kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data\") pod \"horizon-694bb57565-vwmlk\" (UID: 
\"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9xs4\" (UniqueName: \"kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.053567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.055818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.057937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.058770 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.084903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwch2\" (UniqueName: \"kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2\") pod \"watcher-api-0\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.133544 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hsvt5"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.134651 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.150153 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.150319 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cpdhf" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle\") pod \"cinder-db-sync-k6kmw\" (UID: 
\"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155630 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 
09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155693 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvb4v\" (UniqueName: \"kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md8fz\" (UniqueName: \"kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xs4\" (UniqueName: \"kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155835 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.155854 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.156293 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.157796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.158176 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.158761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.160922 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.174963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.177279 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.178477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.180965 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.192525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.196969 4687 generic.go:334] "Generic (PLEG): container finished" podID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerID="cc33ff4dcd16cb45edb8820e860b22b690af9c5d97e9d38afc007c40b02b3ca1" exitCode=0 Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.197010 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" event={"ID":"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c","Type":"ContainerDied","Data":"cc33ff4dcd16cb45edb8820e860b22b690af9c5d97e9d38afc007c40b02b3ca1"} Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.201218 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.210185 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md8fz\" (UniqueName: \"kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz\") pod \"cinder-db-sync-k6kmw\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.221463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xs4\" (UniqueName: \"kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4\") pod \"horizon-694bb57565-vwmlk\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.247392 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.248821 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.267461 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269343 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269380 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8s4\" (UniqueName: \"kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269438 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269476 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " 
pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.269530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvb4v\" (UniqueName: \"kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.283096 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.286991 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.308548 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.312388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hsvt5"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.334276 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvb4v\" (UniqueName: \"kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v\") pod \"neutron-db-sync-q2lvm\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") " pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.354413 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.359786 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q2lvm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data\") pod \"barbican-db-sync-hsvt5\" 
(UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldmh\" (UniqueName: \"kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.379873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8s4\" (UniqueName: \"kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4\") pod \"barbican-db-sync-hsvt5\" (UID: 
\"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.395115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.402914 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.403910 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.408688 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.411386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.411695 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8s4\" (UniqueName: \"kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4\") pod \"barbican-db-sync-hsvt5\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") " pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.413372 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.421671 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.443374 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.451412 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-224sh"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.453037 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.458370 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.462431 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.463693 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z5twx" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.476436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-224sh"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482019 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482262 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldmh\" (UniqueName: \"kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts\") pod 
\"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482429 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482457 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsq56\" (UniqueName: \"kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.482486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.483294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.484401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.484656 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.497104 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hsvt5" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.510903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.524502 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldmh\" (UniqueName: \"kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh\") pod \"horizon-55cf9c4f-b96rd\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.590539 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.591084 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.591178 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.591263 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.591297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.591357 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsq56\" (UniqueName: \"kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595364 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.595829 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpx48\" (UniqueName: \"kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.597458 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.597742 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.616621 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.635620 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.656016 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.656357 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.661605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsq56\" (UniqueName: \"kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56\") pod 
\"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.662237 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.665663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.668008 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.705607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.705716 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.705741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 
09:18:29.705851 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.710819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.712544 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.718283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.718759 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqwq\" (UniqueName: \"kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.718917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpx48\" (UniqueName: \"kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.719612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.751955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpx48\" (UniqueName: \"kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48\") pod \"placement-db-sync-224sh\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.811051 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"] Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.814083 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825502 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqwq\" (UniqueName: \"kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 
09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.825659 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.827147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.827206 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.827408 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.827725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.829200 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-224sh" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.829264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.849638 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqwq\" (UniqueName: \"kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq\") pod \"dnsmasq-dns-6f5b8b9b8c-kmghw\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.852362 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.947764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pfltp"] Mar 14 09:18:29 crc kubenswrapper[4687]: W0314 09:18:29.948533 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ee7c193_02d5_4137_87ec_8c6b43ea068d.slice/crio-0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8 WatchSource:0}: Error finding container 0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8: Status 404 returned error can't find the container with id 0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8 Mar 14 09:18:29 crc kubenswrapper[4687]: I0314 09:18:29.978923 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032161 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032515 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqkx\" (UniqueName: \"kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032545 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.032659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0\") pod \"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\" (UID: 
\"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c\") " Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.044080 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx" (OuterVolumeSpecName: "kube-api-access-tpqkx") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "kube-api-access-tpqkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.122592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.129801 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.131416 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.136314 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.136359 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpqkx\" (UniqueName: \"kubernetes.io/projected/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-kube-api-access-tpqkx\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.136370 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.137569 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.139766 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.140307 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1ec16d_f5de_454a_9f13_0bc248e30307.slice/crio-622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef WatchSource:0}: Error finding container 622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef: Status 404 returned error can't find the container with id 622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.148374 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.162916 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config" (OuterVolumeSpecName: "config") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.164481 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" (UID: "67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.218045 4687 generic.go:334] "Generic (PLEG): container finished" podID="31e4cb7e-94ea-4523-82d1-f63b8df76572" containerID="fb63c7039fd6e35b7cad93e8ed6547c4acc3210f6519fb3538748eb39c7c3041" exitCode=0
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.218124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f796d878c-46z54" event={"ID":"31e4cb7e-94ea-4523-82d1-f63b8df76572","Type":"ContainerDied","Data":"fb63c7039fd6e35b7cad93e8ed6547c4acc3210f6519fb3538748eb39c7c3041"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.218157 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f796d878c-46z54" event={"ID":"31e4cb7e-94ea-4523-82d1-f63b8df76572","Type":"ContainerStarted","Data":"b21588dd971a30348a02fe18fcc8d6622c930cce374c57c315b514ac1b973fcf"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.229072 4687 generic.go:334] "Generic (PLEG): container finished" podID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" containerID="ff49d2ca03080a4c372d210fefc20f47528c4ef51725f4c703ecd76d60f72ad5" exitCode=0
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.229148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wztwf" event={"ID":"38ba5c4b-9a2a-43ce-a6c0-f3488284929c","Type":"ContainerDied","Data":"ff49d2ca03080a4c372d210fefc20f47528c4ef51725f4c703ecd76d60f72ad5"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244249 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244277 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-config\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244286 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244410 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd867bff-75svm" event={"ID":"67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c","Type":"ContainerDied","Data":"c3af45e3b71d341c2c9624984fe398389963c3a020b54c8aa797223e5ccf759d"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244446 4687 scope.go:117] "RemoveContainer" containerID="cc33ff4dcd16cb45edb8820e860b22b690af9c5d97e9d38afc007c40b02b3ca1"
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.244541 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcd867bff-75svm"
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.252299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pfltp" event={"ID":"9ee7c193-02d5-4137-87ec-8c6b43ea068d","Type":"ContainerStarted","Data":"0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.262619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerStarted","Data":"622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.267634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03feaf8d-3929-4702-b7fa-27baf0fb7ff7","Type":"ContainerStarted","Data":"765a87db3a95d383911f7e3b27c4d5c522f3b4b8826e50d99cc48313bb7fe9ca"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.269235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerStarted","Data":"5c00ab725990ec9e1090932c855d3ebd456806de4674ebb146902188b44fcecf"}
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.308920 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k6kmw"]
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.318381 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"]
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.327173 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bcd867bff-75svm"]
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.334650 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694bb57565-vwmlk"]
Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.352855 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21832052_3293_4320_aed2_58a020acb502.slice/crio-2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b WatchSource:0}: Error finding container 2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b: Status 404 returned error can't find the container with id 2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.433595 4687 scope.go:117] "RemoveContainer" containerID="5ceb69180e1f77dd61933e4f6a3ea819490bc13b0231648223c4bf4c9dce0ddc"
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.693766 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hsvt5"]
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.707997 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q2lvm"]
Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.737428 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ffe58c5_8c6d_4c28_9379_3e08e365adef.slice/crio-9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6 WatchSource:0}: Error finding container 9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6: Status 404 returned error can't find the container with id 9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.741697 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-224sh"]
Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.746674 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e40d20_4fba_44d2_b6f9_ce8c2ac65e88.slice/crio-a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df WatchSource:0}: Error finding container a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df: Status 404 returned error can't find the container with id a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.753290 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"]
Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.780957 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0363bfe_1990_4511_b951_6c7e290461c3.slice/crio-41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69 WatchSource:0}: Error finding container 41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69: Status 404 returned error can't find the container with id 41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.909097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"]
Mar 14 09:18:30 crc kubenswrapper[4687]: W0314 09:18:30.916306 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47896424_1aa3_4c3a_a1b5_b6068e158822.slice/crio-d38072c6c0f4328855d3d3f088428a602879d1a44f563c91a9dfb39e6b161992 WatchSource:0}: Error finding container d38072c6c0f4328855d3d3f088428a602879d1a44f563c91a9dfb39e6b161992: Status 404 returned error can't find the container with id d38072c6c0f4328855d3d3f088428a602879d1a44f563c91a9dfb39e6b161992
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.922655 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f796d878c-46z54"
Mar 14 09:18:30 crc kubenswrapper[4687]: I0314 09:18:30.975427 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.085841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.085956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.085994 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.086038 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.086103 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnf55\" (UniqueName: \"kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.086123 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb\") pod \"31e4cb7e-94ea-4523-82d1-f63b8df76572\" (UID: \"31e4cb7e-94ea-4523-82d1-f63b8df76572\") "
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.111108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55" (OuterVolumeSpecName: "kube-api-access-wnf55") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "kube-api-access-wnf55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.200240 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnf55\" (UniqueName: \"kubernetes.io/projected/31e4cb7e-94ea-4523-82d1-f63b8df76572-kube-api-access-wnf55\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.220119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.228661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.289696 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.308068 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.308095 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.308442 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.313034 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.314538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config" (OuterVolumeSpecName: "config") pod "31e4cb7e-94ea-4523-82d1-f63b8df76572" (UID: "31e4cb7e-94ea-4523-82d1-f63b8df76572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.316871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k6kmw" event={"ID":"21832052-3293-4320-aed2-58a020acb502","Type":"ContainerStarted","Data":"2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.320589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" event={"ID":"47896424-1aa3-4c3a-a1b5-b6068e158822","Type":"ContainerStarted","Data":"d38072c6c0f4328855d3d3f088428a602879d1a44f563c91a9dfb39e6b161992"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.321886 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"]
Mar 14 09:18:31 crc kubenswrapper[4687]: E0314 09:18:31.322385 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="init"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.322409 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="init"
Mar 14 09:18:31 crc kubenswrapper[4687]: E0314 09:18:31.322429 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="dnsmasq-dns"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.322437 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="dnsmasq-dns"
Mar 14 09:18:31 crc kubenswrapper[4687]: E0314 09:18:31.322468 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e4cb7e-94ea-4523-82d1-f63b8df76572" containerName="init"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.322478 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e4cb7e-94ea-4523-82d1-f63b8df76572" containerName="init"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.322750 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" containerName="dnsmasq-dns"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.322780 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e4cb7e-94ea-4523-82d1-f63b8df76572" containerName="init"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.323867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q2lvm" event={"ID":"0c55cdca-7409-4935-8192-c4195a654a45","Type":"ContainerStarted","Data":"2a1ef1cd75684a1adea5392f9d083355e8093efd65c41c23db14ae43d773c3b6"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.323899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q2lvm" event={"ID":"0c55cdca-7409-4935-8192-c4195a654a45","Type":"ContainerStarted","Data":"1a33fcfb91c963f345eb09202fb215e9aa0cef6087d3ad948721a33be30a8f9f"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.324064 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.329426 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.348261 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.349944 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-q2lvm" podStartSLOduration=3.349921271 podStartE2EDuration="3.349921271s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:31.338635393 +0000 UTC m=+1296.326875768" watchObservedRunningTime="2026-03-14 09:18:31.349921271 +0000 UTC m=+1296.338161646"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.358169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerStarted","Data":"24f14c01bbf007ef33977d6e868386cad89f3e706de88f7cf9510748767da1cf"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.358217 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerStarted","Data":"c09be1d0c33e77f651c8b11603b18e1695d6904163818aca88c43a3b462f4a9e"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.359289 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.385505 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.409271 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.409234402 podStartE2EDuration="3.409234402s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:31.401315008 +0000 UTC m=+1296.389555383" watchObservedRunningTime="2026-03-14 09:18:31.409234402 +0000 UTC m=+1296.397474777"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410384 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj48l\" (UniqueName: \"kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410590 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410674 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410786 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410801 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.410814 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e4cb7e-94ea-4523-82d1-f63b8df76572-config\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.416111 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bb57565-vwmlk" event={"ID":"f3dfb5fd-210e-4c8e-872a-4548441b3202","Type":"ContainerStarted","Data":"b5c200f1705377f54e017797e233e6b9f151cc2f92f1a3e8081fb38596ca5289"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.418120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cf9c4f-b96rd" event={"ID":"c0363bfe-1990-4511-b951-6c7e290461c3","Type":"ContainerStarted","Data":"41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.430978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pfltp" event={"ID":"9ee7c193-02d5-4137-87ec-8c6b43ea068d","Type":"ContainerStarted","Data":"70c4f9a8b8ff077864fb7796ac21b41e81c7b418a2d12f9fd924165927898f89"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.432645 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f796d878c-46z54" event={"ID":"31e4cb7e-94ea-4523-82d1-f63b8df76572","Type":"ContainerDied","Data":"b21588dd971a30348a02fe18fcc8d6622c930cce374c57c315b514ac1b973fcf"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.432724 4687 scope.go:117] "RemoveContainer" containerID="fb63c7039fd6e35b7cad93e8ed6547c4acc3210f6519fb3538748eb39c7c3041"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.432903 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f796d878c-46z54"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.449901 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hsvt5" event={"ID":"1ffe58c5-8c6d-4c28-9379-3e08e365adef","Type":"ContainerStarted","Data":"9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.456714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerStarted","Data":"5a99a1ad3795cc0b0fb4b570b908d9c4335172b3a08edc89f7644c8d9ef54316"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.478766 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-224sh" event={"ID":"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88","Type":"ContainerStarted","Data":"a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df"}
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.499294 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pfltp" podStartSLOduration=3.499273331 podStartE2EDuration="3.499273331s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:31.455561354 +0000 UTC m=+1296.443801729" watchObservedRunningTime="2026-03-14 09:18:31.499273331 +0000 UTC m=+1296.487513706"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.516151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.516315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj48l\" (UniqueName: \"kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.516473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.516544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.516566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.529196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.530188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.531843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.535698 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.536965 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.546282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj48l\" (UniqueName: \"kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l\") pod \"horizon-59cc544fdf-f92ts\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.562488 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f796d878c-46z54"]
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.683765 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59cc544fdf-f92ts"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.771046 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e4cb7e-94ea-4523-82d1-f63b8df76572" path="/var/lib/kubelet/pods/31e4cb7e-94ea-4523-82d1-f63b8df76572/volumes"
Mar 14 09:18:31 crc kubenswrapper[4687]: I0314 09:18:31.771770 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c" path="/var/lib/kubelet/pods/67e2e2fd-2e10-4a75-bac2-6ffa4dc66c2c/volumes"
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.503873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wztwf" event={"ID":"38ba5c4b-9a2a-43ce-a6c0-f3488284929c","Type":"ContainerDied","Data":"987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5"}
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.504239 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="987d5f9df24a678eaa3e59add43ea97ac9e21fa6e01f2741358017605eb7b4f5"
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.509587 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api-log" containerID="cri-o://c09be1d0c33e77f651c8b11603b18e1695d6904163818aca88c43a3b462f4a9e" gracePeriod=30
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.509881 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" containerID="cri-o://24f14c01bbf007ef33977d6e868386cad89f3e706de88f7cf9510748767da1cf" gracePeriod=30
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.521056 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": EOF"
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.578549 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wztwf"
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.647044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data\") pod \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") "
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.647290 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data\") pod \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") "
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.647405 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thwxx\" (UniqueName: \"kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx\") pod \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") "
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.647517 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle\") pod \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\" (UID: \"38ba5c4b-9a2a-43ce-a6c0-f3488284929c\") "
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.659309 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "38ba5c4b-9a2a-43ce-a6c0-f3488284929c" (UID: "38ba5c4b-9a2a-43ce-a6c0-f3488284929c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.663871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx" (OuterVolumeSpecName: "kube-api-access-thwxx") pod "38ba5c4b-9a2a-43ce-a6c0-f3488284929c" (UID: "38ba5c4b-9a2a-43ce-a6c0-f3488284929c"). InnerVolumeSpecName "kube-api-access-thwxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.716505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data" (OuterVolumeSpecName: "config-data") pod "38ba5c4b-9a2a-43ce-a6c0-f3488284929c" (UID: "38ba5c4b-9a2a-43ce-a6c0-f3488284929c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.740262 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38ba5c4b-9a2a-43ce-a6c0-f3488284929c" (UID: "38ba5c4b-9a2a-43ce-a6c0-f3488284929c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.749930 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.749989 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.750004 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thwxx\" (UniqueName: \"kubernetes.io/projected/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-kube-api-access-thwxx\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:32 crc kubenswrapper[4687]: I0314 09:18:32.750016 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ba5c4b-9a2a-43ce-a6c0-f3488284929c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:33 crc kubenswrapper[4687]: I0314 09:18:33.528775 4687 generic.go:334] "Generic (PLEG): container finished" podID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerID="c9f3b0de1dcf79cf5fc8f185823748d8dc912380561af1cb9f2470a1d2df7c35" exitCode=0
Mar 14 09:18:33 crc kubenswrapper[4687]: I0314 09:18:33.529065 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" event={"ID":"47896424-1aa3-4c3a-a1b5-b6068e158822","Type":"ContainerDied","Data":"c9f3b0de1dcf79cf5fc8f185823748d8dc912380561af1cb9f2470a1d2df7c35"}
Mar 14 09:18:33 crc kubenswrapper[4687]: I0314 09:18:33.534413 4687 generic.go:334] "Generic (PLEG): container finished" podID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerID="c09be1d0c33e77f651c8b11603b18e1695d6904163818aca88c43a3b462f4a9e" exitCode=143
Mar 14 09:18:33 crc kubenswrapper[4687]: I0314 09:18:33.534483 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wztwf"
Mar 14 09:18:33 crc kubenswrapper[4687]: I0314 09:18:33.534451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerDied","Data":"c09be1d0c33e77f651c8b11603b18e1695d6904163818aca88c43a3b462f4a9e"}
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.204222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.245602 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"]
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.275067 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"]
Mar 14 09:18:34 crc kubenswrapper[4687]: E0314 09:18:34.275450 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" containerName="glance-db-sync"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.275463 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" containerName="glance-db-sync"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.275642 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" containerName="glance-db-sync"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.276628 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95d6d9649-krjq9"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.298986 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"]
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.383537 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"]
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414364 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414429 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9"
Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414453 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9"
Mar
14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppps\" (UniqueName: \"kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.414499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.516316 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.516399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.516449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 
09:18:34.516465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.516482 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.516500 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppps\" (UniqueName: \"kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.517730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.517745 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.517802 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.519759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.521829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.548453 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" event={"ID":"47896424-1aa3-4c3a-a1b5-b6068e158822","Type":"ContainerStarted","Data":"93e4a7f7709feddb95be48b4aa8a3e35fa3ec69254c16de36afe73e6a255794a"} Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.550146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppps\" (UniqueName: \"kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps\") pod \"dnsmasq-dns-95d6d9649-krjq9\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.550226 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.559504 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerStarted","Data":"3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e"} Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.569989 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59cc544fdf-f92ts" event={"ID":"ac52992b-0253-4eb0-9ae7-248d7c44ccf3","Type":"ContainerStarted","Data":"b008802acca9c24c6dbebfa879d0e297e755a1f0073ccebddbee48695bdefd07"} Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.572413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03feaf8d-3929-4702-b7fa-27baf0fb7ff7","Type":"ContainerStarted","Data":"a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0"} Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.583991 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" podStartSLOduration=6.583945762 podStartE2EDuration="6.583945762s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:34.580695781 +0000 UTC m=+1299.568936156" watchObservedRunningTime="2026-03-14 09:18:34.583945762 +0000 UTC m=+1299.572186147" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.605444 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.016958084 podStartE2EDuration="6.605423461s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="2026-03-14 09:18:29.987629734 +0000 UTC m=+1294.975870099" lastFinishedPulling="2026-03-14 09:18:33.576095101 +0000 UTC m=+1298.564335476" observedRunningTime="2026-03-14 09:18:34.599055623 +0000 UTC m=+1299.587296018" watchObservedRunningTime="2026-03-14 09:18:34.605423461 +0000 UTC 
m=+1299.593663836" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.620731 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:18:34 crc kubenswrapper[4687]: I0314 09:18:34.622968 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.237597481 podStartE2EDuration="6.622948642s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="2026-03-14 09:18:30.188030713 +0000 UTC m=+1295.176271088" lastFinishedPulling="2026-03-14 09:18:33.573381874 +0000 UTC m=+1298.561622249" observedRunningTime="2026-03-14 09:18:34.618728558 +0000 UTC m=+1299.606968943" watchObservedRunningTime="2026-03-14 09:18:34.622948642 +0000 UTC m=+1299.611189017" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.091126 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.095527 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.101666 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4zr4j" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.101859 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.102326 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.111384 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233537 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: 
I0314 09:18:35.233615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5226\" (UniqueName: \"kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.233720 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.266766 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": read tcp 10.217.0.2:52406->10.217.0.155:9322: read: connection reset by peer" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.267190 4687 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.335926 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336008 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336091 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336217 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.336913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5226\" (UniqueName: \"kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.337465 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.337498 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" 
(UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.343477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.345558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.356960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.360131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5226\" (UniqueName: \"kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.361894 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.368845 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.372441 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.383507 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.431174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdd29\" (UniqueName: \"kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" 
Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447630 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447682 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.447714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.549696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc 
kubenswrapper[4687]: I0314 09:18:35.549749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.549836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdd29\" (UniqueName: \"kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.549899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.549951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.549982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.550102 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.551320 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.552728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.553470 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.558671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.559415 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.565674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.569651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdd29\" (UniqueName: \"kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.583562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.588685 4687 generic.go:334] "Generic (PLEG): container finished" podID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerID="24f14c01bbf007ef33977d6e868386cad89f3e706de88f7cf9510748767da1cf" exitCode=0 Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.588912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerDied","Data":"24f14c01bbf007ef33977d6e868386cad89f3e706de88f7cf9510748767da1cf"} Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.589069 4687 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" containerID="cri-o://93e4a7f7709feddb95be48b4aa8a3e35fa3ec69254c16de36afe73e6a255794a" gracePeriod=10 Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.729471 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:18:35 crc kubenswrapper[4687]: I0314 09:18:35.826361 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:18:36 crc kubenswrapper[4687]: I0314 09:18:36.604208 4687 generic.go:334] "Generic (PLEG): container finished" podID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerID="93e4a7f7709feddb95be48b4aa8a3e35fa3ec69254c16de36afe73e6a255794a" exitCode=0 Mar 14 09:18:36 crc kubenswrapper[4687]: I0314 09:18:36.604290 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" event={"ID":"47896424-1aa3-4c3a-a1b5-b6068e158822","Type":"ContainerDied","Data":"93e4a7f7709feddb95be48b4aa8a3e35fa3ec69254c16de36afe73e6a255794a"} Mar 14 09:18:36 crc kubenswrapper[4687]: I0314 09:18:36.607054 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ee7c193-02d5-4137-87ec-8c6b43ea068d" containerID="70c4f9a8b8ff077864fb7796ac21b41e81c7b418a2d12f9fd924165927898f89" exitCode=0 Mar 14 09:18:36 crc kubenswrapper[4687]: I0314 09:18:36.607095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pfltp" event={"ID":"9ee7c193-02d5-4137-87ec-8c6b43ea068d","Type":"ContainerDied","Data":"70c4f9a8b8ff077864fb7796ac21b41e81c7b418a2d12f9fd924165927898f89"} Mar 14 09:18:38 crc kubenswrapper[4687]: I0314 09:18:38.932490 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 14 09:18:38 crc kubenswrapper[4687]: I0314 09:18:38.932989 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 14 09:18:38 crc kubenswrapper[4687]: I0314 09:18:38.962477 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.159261 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:39 crc kubenswrapper[4687]: E0314 09:18:39.160934 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e is running failed: container process not found" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:18:39 crc kubenswrapper[4687]: E0314 09:18:39.163822 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e is running failed: container process not found" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:18:39 crc kubenswrapper[4687]: E0314 09:18:39.164226 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e is running failed: container process not found" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:18:39 crc kubenswrapper[4687]: E0314 09:18:39.164305 4687 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.204131 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.657145 4687 generic.go:334] "Generic (PLEG): container finished" podID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" exitCode=1 Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.657221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerDied","Data":"3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e"} Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.658510 4687 scope.go:117] "RemoveContainer" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.707671 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 14 09:18:39 crc kubenswrapper[4687]: I0314 09:18:39.754180 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.370576 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.438276 4687 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.690544 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" containerID="cri-o://a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" gracePeriod=30 Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.722768 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694bb57565-vwmlk"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.766444 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74f987fc4-zw2rw"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.784226 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.787539 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.825432 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74f987fc4-zw2rw"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.865955 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882549 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrm6\" (UniqueName: \"kubernetes.io/projected/a89460b9-5c8a-4000-ac6a-6202699a10d1-kube-api-access-qcrm6\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-config-data\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a89460b9-5c8a-4000-ac6a-6202699a10d1-logs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882674 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-scripts\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-tls-certs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-combined-ca-bundle\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.882793 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-secret-key\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.893420 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dcd9ff5b-bprxd"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.894946 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dcd9ff5b-bprxd"] Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.895030 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrm6\" (UniqueName: \"kubernetes.io/projected/a89460b9-5c8a-4000-ac6a-6202699a10d1-kube-api-access-qcrm6\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-config-data\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-config-data\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984316 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-scripts\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984352 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a89460b9-5c8a-4000-ac6a-6202699a10d1-logs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984371 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-scripts\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-combined-ca-bundle\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984419 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96cb\" (UniqueName: \"kubernetes.io/projected/00a62493-95c1-4765-8b9e-4188b68c587c-kube-api-access-p96cb\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/00a62493-95c1-4765-8b9e-4188b68c587c-logs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984479 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-tls-certs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-secret-key\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984533 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-tls-certs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-combined-ca-bundle\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.984575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-secret-key\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.986839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a89460b9-5c8a-4000-ac6a-6202699a10d1-logs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.987110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-scripts\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.988138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a89460b9-5c8a-4000-ac6a-6202699a10d1-config-data\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.991391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-secret-key\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.991733 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-horizon-tls-certs\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " 
pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:41 crc kubenswrapper[4687]: I0314 09:18:41.994437 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89460b9-5c8a-4000-ac6a-6202699a10d1-combined-ca-bundle\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.001280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrm6\" (UniqueName: \"kubernetes.io/projected/a89460b9-5c8a-4000-ac6a-6202699a10d1-kube-api-access-qcrm6\") pod \"horizon-74f987fc4-zw2rw\" (UID: \"a89460b9-5c8a-4000-ac6a-6202699a10d1\") " pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086733 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-config-data\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-scripts\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-combined-ca-bundle\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086895 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p96cb\" (UniqueName: \"kubernetes.io/projected/00a62493-95c1-4765-8b9e-4188b68c587c-kube-api-access-p96cb\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a62493-95c1-4765-8b9e-4188b68c587c-logs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.086986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-secret-key\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.087007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-tls-certs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.088788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-scripts\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.088788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/00a62493-95c1-4765-8b9e-4188b68c587c-logs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.088897 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00a62493-95c1-4765-8b9e-4188b68c587c-config-data\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.091848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-secret-key\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.093732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-combined-ca-bundle\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.104982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a62493-95c1-4765-8b9e-4188b68c587c-horizon-tls-certs\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: \"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.111418 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96cb\" (UniqueName: \"kubernetes.io/projected/00a62493-95c1-4765-8b9e-4188b68c587c-kube-api-access-p96cb\") pod \"horizon-7dcd9ff5b-bprxd\" (UID: 
\"00a62493-95c1-4765-8b9e-4188b68c587c\") " pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.126538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:18:42 crc kubenswrapper[4687]: I0314 09:18:42.219011 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:18:43 crc kubenswrapper[4687]: I0314 09:18:43.712696 4687 generic.go:334] "Generic (PLEG): container finished" podID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" exitCode=0 Mar 14 09:18:43 crc kubenswrapper[4687]: I0314 09:18:43.712784 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03feaf8d-3929-4702-b7fa-27baf0fb7ff7","Type":"ContainerDied","Data":"a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0"} Mar 14 09:18:43 crc kubenswrapper[4687]: E0314 09:18:43.933208 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:43 crc kubenswrapper[4687]: E0314 09:18:43.933669 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:43 crc kubenswrapper[4687]: E0314 09:18:43.933918 4687 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:43 crc kubenswrapper[4687]: E0314 09:18:43.933945 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:18:44 crc kubenswrapper[4687]: I0314 09:18:44.202849 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 14 09:18:45 crc kubenswrapper[4687]: I0314 09:18:45.141039 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Mar 14 09:18:47 crc kubenswrapper[4687]: E0314 09:18:47.174414 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:47 crc kubenswrapper[4687]: E0314 09:18:47.174750 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:47 crc kubenswrapper[4687]: E0314 09:18:47.174883 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n544h76h588h5c8h55bh68dh65h54ch59fh58bh644h545h675h667h688h5d6h94h5bbh9bh55h544h55hb5hbbh668h689h66fh548hdfhffh5bdh64bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rldmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-55cf9c4f-b96rd_openstack(c0363bfe-1990-4511-b951-6c7e290461c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:18:47 crc kubenswrapper[4687]: E0314 09:18:47.209960 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-55cf9c4f-b96rd" podUID="c0363bfe-1990-4511-b951-6c7e290461c3" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.322760 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.395584 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqwq\" (UniqueName: \"kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq\") pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.397058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config\") pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.397276 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb\") 
pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.397573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0\") pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.397979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc\") pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.399035 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb\") pod \"47896424-1aa3-4c3a-a1b5-b6068e158822\" (UID: \"47896424-1aa3-4c3a-a1b5-b6068e158822\") " Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.402431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq" (OuterVolumeSpecName: "kube-api-access-jrqwq") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). InnerVolumeSpecName "kube-api-access-jrqwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.447149 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.449951 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config" (OuterVolumeSpecName: "config") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.456542 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.466938 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.491030 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47896424-1aa3-4c3a-a1b5-b6068e158822" (UID: "47896424-1aa3-4c3a-a1b5-b6068e158822"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503442 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503477 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503488 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503497 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqwq\" (UniqueName: \"kubernetes.io/projected/47896424-1aa3-4c3a-a1b5-b6068e158822-kube-api-access-jrqwq\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503506 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.503517 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47896424-1aa3-4c3a-a1b5-b6068e158822-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.754358 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.754315 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" event={"ID":"47896424-1aa3-4c3a-a1b5-b6068e158822","Type":"ContainerDied","Data":"d38072c6c0f4328855d3d3f088428a602879d1a44f563c91a9dfb39e6b161992"} Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.754809 4687 scope.go:117] "RemoveContainer" containerID="93e4a7f7709feddb95be48b4aa8a3e35fa3ec69254c16de36afe73e6a255794a" Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.814124 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"] Mar 14 09:18:47 crc kubenswrapper[4687]: I0314 09:18:47.825632 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f5b8b9b8c-kmghw"] Mar 14 09:18:48 crc kubenswrapper[4687]: E0314 09:18:48.939019 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:48 crc kubenswrapper[4687]: E0314 09:18:48.940494 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:48 crc kubenswrapper[4687]: E0314 09:18:48.940843 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created 
or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:48 crc kubenswrapper[4687]: E0314 09:18:48.940871 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:18:49 crc kubenswrapper[4687]: I0314 09:18:49.159494 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:49 crc kubenswrapper[4687]: I0314 09:18:49.202484 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 14 09:18:49 crc kubenswrapper[4687]: I0314 09:18:49.748823 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" path="/var/lib/kubelet/pods/47896424-1aa3-4c3a-a1b5-b6068e158822/volumes" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.142249 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f5b8b9b8c-kmghw" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.179634 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.179694 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.179831 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54fh59dhchbdh554h677h5dfh5c5h6fh55ch697h5cch549h6fh574h5b9h567h65fh7ch58h645h58dh84hbh655hc7h548h57dh99h5c8h64chb4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9xs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil
,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-694bb57565-vwmlk_openstack(f3dfb5fd-210e-4c8e-872a-4548441b3202): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.185350 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-694bb57565-vwmlk" podUID="f3dfb5fd-210e-4c8e-872a-4548441b3202" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.189963 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.190038 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 09:18:50.190204 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n687h7dhbdh96h6chfdh54fh67ch5d7h645h67h54ch545h77h8h6fhc8h6h687h544h66fh675hdfh667h8ch58dh595h95h78h555h66fh5bdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj48l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59cc544fdf-f92ts_openstack(ac52992b-0253-4eb0-9ae7-248d7c44ccf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:18:50 crc kubenswrapper[4687]: E0314 
09:18:50.193514 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-59cc544fdf-f92ts" podUID="ac52992b-0253-4eb0-9ae7-248d7c44ccf3" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.293748 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363645 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle\") pod \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys\") pod \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363887 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts\") pod \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363936 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxnsl\" (UniqueName: \"kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl\") pod 
\"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363966 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys\") pod \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.363987 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data\") pod \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\" (UID: \"9ee7c193-02d5-4137-87ec-8c6b43ea068d\") " Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.378295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts" (OuterVolumeSpecName: "scripts") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.378435 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.379196 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl" (OuterVolumeSpecName: "kube-api-access-pxnsl") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). 
InnerVolumeSpecName "kube-api-access-pxnsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.381682 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.399596 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.445671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data" (OuterVolumeSpecName: "config-data") pod "9ee7c193-02d5-4137-87ec-8c6b43ea068d" (UID: "9ee7c193-02d5-4137-87ec-8c6b43ea068d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466575 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466616 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxnsl\" (UniqueName: \"kubernetes.io/projected/9ee7c193-02d5-4137-87ec-8c6b43ea068d-kube-api-access-pxnsl\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466626 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466635 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466646 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.466654 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9ee7c193-02d5-4137-87ec-8c6b43ea068d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.782070 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pfltp" event={"ID":"9ee7c193-02d5-4137-87ec-8c6b43ea068d","Type":"ContainerDied","Data":"0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8"} Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 
09:18:50.782395 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e602457abf149c5e09edb36ac367f62963890795ce9b0c5d6a8123da1395fc8" Mar 14 09:18:50 crc kubenswrapper[4687]: I0314 09:18:50.782184 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pfltp" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.370387 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pfltp"] Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.377594 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pfltp"] Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466004 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jb4md"] Mar 14 09:18:51 crc kubenswrapper[4687]: E0314 09:18:51.466378 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466392 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" Mar 14 09:18:51 crc kubenswrapper[4687]: E0314 09:18:51.466425 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee7c193-02d5-4137-87ec-8c6b43ea068d" containerName="keystone-bootstrap" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466431 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee7c193-02d5-4137-87ec-8c6b43ea068d" containerName="keystone-bootstrap" Mar 14 09:18:51 crc kubenswrapper[4687]: E0314 09:18:51.466442 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="init" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466448 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" 
containerName="init" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466603 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee7c193-02d5-4137-87ec-8c6b43ea068d" containerName="keystone-bootstrap" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.466616 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="47896424-1aa3-4c3a-a1b5-b6068e158822" containerName="dnsmasq-dns" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.468585 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.470897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.470973 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4vhwn" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.471004 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.471098 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.471216 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.490444 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jb4md"] Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.501956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 
crc kubenswrapper[4687]: I0314 09:18:51.502015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.502058 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.502081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cqw\" (UniqueName: \"kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.502154 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.502176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc 
kubenswrapper[4687]: I0314 09:18:51.603558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.603613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.603716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.603771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5cqw\" (UniqueName: \"kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.603903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.603927 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.609403 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.609558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.610756 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.612042 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.624700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle\") pod 
\"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.625006 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5cqw\" (UniqueName: \"kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw\") pod \"keystone-bootstrap-jb4md\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.750345 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee7c193-02d5-4137-87ec-8c6b43ea068d" path="/var/lib/kubelet/pods/9ee7c193-02d5-4137-87ec-8c6b43ea068d/volumes" Mar 14 09:18:51 crc kubenswrapper[4687]: I0314 09:18:51.791164 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:18:53 crc kubenswrapper[4687]: E0314 09:18:53.932608 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:53 crc kubenswrapper[4687]: E0314 09:18:53.933641 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:53 crc kubenswrapper[4687]: E0314 09:18:53.934102 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:53 crc kubenswrapper[4687]: E0314 09:18:53.934128 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:18:58 crc kubenswrapper[4687]: E0314 09:18:58.932704 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:58 crc kubenswrapper[4687]: E0314 09:18:58.933706 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:58 crc kubenswrapper[4687]: E0314 09:18:58.934140 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container 
process not found" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:18:58 crc kubenswrapper[4687]: E0314 09:18:58.934171 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:18:59 crc kubenswrapper[4687]: I0314 09:18:59.158826 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:59 crc kubenswrapper[4687]: I0314 09:18:59.158876 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:18:59 crc kubenswrapper[4687]: I0314 09:18:59.203749 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:19:00 crc kubenswrapper[4687]: I0314 09:19:00.882629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cf9c4f-b96rd" event={"ID":"c0363bfe-1990-4511-b951-6c7e290461c3","Type":"ContainerDied","Data":"41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69"} Mar 14 09:19:00 crc kubenswrapper[4687]: I0314 09:19:00.882947 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f6e5642dba1e60a9816f7faebab0ed652344910e0b992acc04e7c6da936d69" Mar 14 09:19:00 crc kubenswrapper[4687]: I0314 09:19:00.901691 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts\") pod \"c0363bfe-1990-4511-b951-6c7e290461c3\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001381 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data\") pod \"c0363bfe-1990-4511-b951-6c7e290461c3\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key\") pod \"c0363bfe-1990-4511-b951-6c7e290461c3\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001528 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs\") pod \"c0363bfe-1990-4511-b951-6c7e290461c3\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001771 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rldmh\" (UniqueName: \"kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh\") pod \"c0363bfe-1990-4511-b951-6c7e290461c3\" (UID: \"c0363bfe-1990-4511-b951-6c7e290461c3\") " Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001931 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts" (OuterVolumeSpecName: "scripts") pod "c0363bfe-1990-4511-b951-6c7e290461c3" (UID: "c0363bfe-1990-4511-b951-6c7e290461c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.001988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data" (OuterVolumeSpecName: "config-data") pod "c0363bfe-1990-4511-b951-6c7e290461c3" (UID: "c0363bfe-1990-4511-b951-6c7e290461c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.002260 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.002286 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0363bfe-1990-4511-b951-6c7e290461c3-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.002541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs" (OuterVolumeSpecName: "logs") pod "c0363bfe-1990-4511-b951-6c7e290461c3" (UID: "c0363bfe-1990-4511-b951-6c7e290461c3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.008025 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh" (OuterVolumeSpecName: "kube-api-access-rldmh") pod "c0363bfe-1990-4511-b951-6c7e290461c3" (UID: "c0363bfe-1990-4511-b951-6c7e290461c3"). InnerVolumeSpecName "kube-api-access-rldmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.008106 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c0363bfe-1990-4511-b951-6c7e290461c3" (UID: "c0363bfe-1990-4511-b951-6c7e290461c3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.103818 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rldmh\" (UniqueName: \"kubernetes.io/projected/c0363bfe-1990-4511-b951-6c7e290461c3-kube-api-access-rldmh\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.103847 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c0363bfe-1990-4511-b951-6c7e290461c3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.103857 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0363bfe-1990-4511-b951-6c7e290461c3-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.595830 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.243:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.596122 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.596262 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.243:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gg8s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hsvt5_openstack(1ffe58c5-8c6d-4c28-9379-3e08e365adef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.597592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hsvt5" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.889570 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cf9c4f-b96rd" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.891478 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-hsvt5" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.933218 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.933271 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Mar 14 09:19:01 crc kubenswrapper[4687]: E0314 09:19:01.933408 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:38.102.83.243:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n655h6ch594h5f8h69h68bh7chd5h568h67fh555h655hbh699hd5h56dh55bhf9hbch5cch598h6ch574h5bfhf8h56dh65h568h667h669h6bh5cfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsq56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0b74d57b-9951-4e4d-9906-18e5ae0f4010): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.946648 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"] Mar 14 09:19:01 crc kubenswrapper[4687]: I0314 09:19:01.956690 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55cf9c4f-b96rd"] Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.032868 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.037760 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.045169 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.053675 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59cc544fdf-f92ts" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts\") pod \"f3dfb5fd-210e-4c8e-872a-4548441b3202\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231349 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data\") pod \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231385 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data\") pod \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231444 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwch2\" (UniqueName: \"kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2\") pod \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231469 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj48l\" (UniqueName: \"kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l\") pod \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231499 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data\") pod \"f3dfb5fd-210e-4c8e-872a-4548441b3202\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231551 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data\") pod \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231606 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts\") pod \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231625 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key\") pod \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs\") pod \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\" (UID: \"ac52992b-0253-4eb0-9ae7-248d7c44ccf3\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8sp7\" (UniqueName: \"kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7\") pod \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231720 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs\") pod \"f3dfb5fd-210e-4c8e-872a-4548441b3202\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231747 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle\") pod \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231782 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs\") pod \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\" (UID: \"03feaf8d-3929-4702-b7fa-27baf0fb7ff7\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231858 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key\") pod \"f3dfb5fd-210e-4c8e-872a-4548441b3202\" (UID: \"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs\") pod \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231907 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9xs4\" (UniqueName: \"kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4\") pod \"f3dfb5fd-210e-4c8e-872a-4548441b3202\" (UID: 
\"f3dfb5fd-210e-4c8e-872a-4548441b3202\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231942 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca\") pod \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.231974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle\") pod \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\" (UID: \"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e\") " Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs" (OuterVolumeSpecName: "logs") pod "03feaf8d-3929-4702-b7fa-27baf0fb7ff7" (UID: "03feaf8d-3929-4702-b7fa-27baf0fb7ff7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235656 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data" (OuterVolumeSpecName: "config-data") pod "ac52992b-0253-4eb0-9ae7-248d7c44ccf3" (UID: "ac52992b-0253-4eb0-9ae7-248d7c44ccf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs" (OuterVolumeSpecName: "logs") pod "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" (UID: "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs" (OuterVolumeSpecName: "logs") pod "ac52992b-0253-4eb0-9ae7-248d7c44ccf3" (UID: "ac52992b-0253-4eb0-9ae7-248d7c44ccf3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235619 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts" (OuterVolumeSpecName: "scripts") pod "f3dfb5fd-210e-4c8e-872a-4548441b3202" (UID: "f3dfb5fd-210e-4c8e-872a-4548441b3202"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.235919 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs" (OuterVolumeSpecName: "logs") pod "f3dfb5fd-210e-4c8e-872a-4548441b3202" (UID: "f3dfb5fd-210e-4c8e-872a-4548441b3202"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.236524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data" (OuterVolumeSpecName: "config-data") pod "f3dfb5fd-210e-4c8e-872a-4548441b3202" (UID: "f3dfb5fd-210e-4c8e-872a-4548441b3202"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.239214 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts" (OuterVolumeSpecName: "scripts") pod "ac52992b-0253-4eb0-9ae7-248d7c44ccf3" (UID: "ac52992b-0253-4eb0-9ae7-248d7c44ccf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.246048 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ac52992b-0253-4eb0-9ae7-248d7c44ccf3" (UID: "ac52992b-0253-4eb0-9ae7-248d7c44ccf3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.246068 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7" (OuterVolumeSpecName: "kube-api-access-b8sp7") pod "03feaf8d-3929-4702-b7fa-27baf0fb7ff7" (UID: "03feaf8d-3929-4702-b7fa-27baf0fb7ff7"). InnerVolumeSpecName "kube-api-access-b8sp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.246160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4" (OuterVolumeSpecName: "kube-api-access-z9xs4") pod "f3dfb5fd-210e-4c8e-872a-4548441b3202" (UID: "f3dfb5fd-210e-4c8e-872a-4548441b3202"). InnerVolumeSpecName "kube-api-access-z9xs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.247930 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l" (OuterVolumeSpecName: "kube-api-access-fj48l") pod "ac52992b-0253-4eb0-9ae7-248d7c44ccf3" (UID: "ac52992b-0253-4eb0-9ae7-248d7c44ccf3"). InnerVolumeSpecName "kube-api-access-fj48l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.249639 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f3dfb5fd-210e-4c8e-872a-4548441b3202" (UID: "f3dfb5fd-210e-4c8e-872a-4548441b3202"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.250688 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2" (OuterVolumeSpecName: "kube-api-access-wwch2") pod "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" (UID: "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e"). InnerVolumeSpecName "kube-api-access-wwch2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.261539 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03feaf8d-3929-4702-b7fa-27baf0fb7ff7" (UID: "03feaf8d-3929-4702-b7fa-27baf0fb7ff7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.269157 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" (UID: "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.277749 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" (UID: "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.300556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data" (OuterVolumeSpecName: "config-data") pod "03feaf8d-3929-4702-b7fa-27baf0fb7ff7" (UID: "03feaf8d-3929-4702-b7fa-27baf0fb7ff7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.327528 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data" (OuterVolumeSpecName: "config-data") pod "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" (UID: "27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335270 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335298 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335311 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335322 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8sp7\" (UniqueName: \"kubernetes.io/projected/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-kube-api-access-b8sp7\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335345 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfb5fd-210e-4c8e-872a-4548441b3202-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335354 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335363 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335373 4687 reconciler_common.go:293] "Volume detached for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3dfb5fd-210e-4c8e-872a-4548441b3202-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335382 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335391 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9xs4\" (UniqueName: \"kubernetes.io/projected/f3dfb5fd-210e-4c8e-872a-4548441b3202-kube-api-access-z9xs4\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335403 4687 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335412 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335422 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335430 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335439 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03feaf8d-3929-4702-b7fa-27baf0fb7ff7-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335449 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwch2\" (UniqueName: \"kubernetes.io/projected/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-kube-api-access-wwch2\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335457 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj48l\" (UniqueName: \"kubernetes.io/projected/ac52992b-0253-4eb0-9ae7-248d7c44ccf3-kube-api-access-fj48l\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335465 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3dfb5fd-210e-4c8e-872a-4548441b3202-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.335474 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.899024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03feaf8d-3929-4702-b7fa-27baf0fb7ff7","Type":"ContainerDied","Data":"765a87db3a95d383911f7e3b27c4d5c522f3b4b8826e50d99cc48313bb7fe9ca"} Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.899201 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.900727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59cc544fdf-f92ts" event={"ID":"ac52992b-0253-4eb0-9ae7-248d7c44ccf3","Type":"ContainerDied","Data":"b008802acca9c24c6dbebfa879d0e297e755a1f0073ccebddbee48695bdefd07"} Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.900855 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59cc544fdf-f92ts" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.918271 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e","Type":"ContainerDied","Data":"5c00ab725990ec9e1090932c855d3ebd456806de4674ebb146902188b44fcecf"} Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.918392 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.921536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bb57565-vwmlk" event={"ID":"f3dfb5fd-210e-4c8e-872a-4548441b3202","Type":"ContainerDied","Data":"b5c200f1705377f54e017797e233e6b9f151cc2f92f1a3e8081fb38596ca5289"} Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.921660 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-694bb57565-vwmlk" Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.980215 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:02 crc kubenswrapper[4687]: I0314 09:19:02.990220 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.009291 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: E0314 09:19:03.009869 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.009884 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:19:03 crc kubenswrapper[4687]: E0314 09:19:03.009898 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.009905 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" Mar 14 09:19:03 crc kubenswrapper[4687]: E0314 09:19:03.009930 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api-log" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.009937 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api-log" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.010108 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.010129 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" containerName="watcher-applier" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.010138 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api-log" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.011120 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.013097 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.055670 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.077639 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.093141 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59cc544fdf-f92ts"] Mar 14 09:19:03 crc kubenswrapper[4687]: E0314 09:19:03.097422 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac52992b_0253_4eb0_9ae7_248d7c44ccf3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e30c76_30e3_40bf_9bb2_3c5abfa6fc5e.slice/crio-5c00ab725990ec9e1090932c855d3ebd456806de4674ebb146902188b44fcecf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e30c76_30e3_40bf_9bb2_3c5abfa6fc5e.slice\": RecentStats: unable to find data in memory cache]" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.101524 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/watcher-applier-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.109600 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.117556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.119310 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.121267 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.127690 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.158282 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694bb57565-vwmlk"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.160314 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.160524 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.160603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.160695 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pqs\" (UniqueName: \"kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.160736 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.176522 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-694bb57565-vwmlk"] Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.272440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.272611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.272683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ms9pp\" (UniqueName: \"kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.272777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.272920 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pqs\" (UniqueName: \"kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.273012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.273030 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.273183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle\") pod \"watcher-api-0\" (UID: 
\"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.273272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.274788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.279368 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.279624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.280286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.294227 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h8pqs\" (UniqueName: \"kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs\") pod \"watcher-api-0\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.354092 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.374985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.375442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.375482 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9pp\" (UniqueName: \"kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.375568 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.376082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.379578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.381452 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.392278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9pp\" (UniqueName: \"kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp\") pod \"watcher-applier-0\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.442069 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.749003 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03feaf8d-3929-4702-b7fa-27baf0fb7ff7" path="/var/lib/kubelet/pods/03feaf8d-3929-4702-b7fa-27baf0fb7ff7/volumes" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.749664 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" path="/var/lib/kubelet/pods/27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e/volumes" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.750477 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac52992b-0253-4eb0-9ae7-248d7c44ccf3" path="/var/lib/kubelet/pods/ac52992b-0253-4eb0-9ae7-248d7c44ccf3/volumes" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.751417 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0363bfe-1990-4511-b951-6c7e290461c3" path="/var/lib/kubelet/pods/c0363bfe-1990-4511-b951-6c7e290461c3/volumes" Mar 14 09:19:03 crc kubenswrapper[4687]: I0314 09:19:03.751841 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3dfb5fd-210e-4c8e-872a-4548441b3202" path="/var/lib/kubelet/pods/f3dfb5fd-210e-4c8e-872a-4548441b3202/volumes" Mar 14 09:19:04 crc kubenswrapper[4687]: I0314 09:19:04.204070 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="27e30c76-30e3-40bf-9bb2-3c5abfa6fc5e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:19:07 crc kubenswrapper[4687]: I0314 09:19:07.279385 4687 scope.go:117] "RemoveContainer" containerID="c9f3b0de1dcf79cf5fc8f185823748d8dc912380561af1cb9f2470a1d2df7c35" Mar 14 09:19:07 crc kubenswrapper[4687]: E0314 09:19:07.329291 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 14 09:19:07 crc kubenswrapper[4687]: E0314 09:19:07.329640 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 14 09:19:07 crc kubenswrapper[4687]: E0314 09:19:07.329785 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.243:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bund
le,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-k6kmw_openstack(21832052-3293-4320-aed2-58a020acb502): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:07 crc kubenswrapper[4687]: E0314 09:19:07.330970 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-k6kmw" podUID="21832052-3293-4320-aed2-58a020acb502" Mar 14 09:19:07 crc kubenswrapper[4687]: I0314 09:19:07.614471 4687 scope.go:117] "RemoveContainer" containerID="a3086e6ec811e8ee8b6ea9d63356088247362e3a74ececb96ba242a2e751adf0" Mar 14 09:19:07 crc kubenswrapper[4687]: I0314 09:19:07.653884 4687 scope.go:117] "RemoveContainer" containerID="24f14c01bbf007ef33977d6e868386cad89f3e706de88f7cf9510748767da1cf" Mar 14 09:19:07 crc kubenswrapper[4687]: I0314 09:19:07.686182 4687 scope.go:117] "RemoveContainer" 
containerID="c09be1d0c33e77f651c8b11603b18e1695d6904163818aca88c43a3b462f4a9e" Mar 14 09:19:07 crc kubenswrapper[4687]: I0314 09:19:07.974593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-224sh" event={"ID":"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88","Type":"ContainerStarted","Data":"2e8b97f43d874ea3f51915e5fb56d0a2ab59daae20c06d9af5c184ae558e0773"} Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.003454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerStarted","Data":"6b34699a48807d1be528b0f4a03ac614f0f1cbba9aa9ff96331416b911e21237"} Mar 14 09:19:08 crc kubenswrapper[4687]: E0314 09:19:08.014412 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-k6kmw" podUID="21832052-3293-4320-aed2-58a020acb502" Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.030210 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.031649 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-224sh" podStartSLOduration=8.877686845 podStartE2EDuration="40.031634771s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="2026-03-14 09:18:30.752523536 +0000 UTC m=+1295.740763911" lastFinishedPulling="2026-03-14 09:19:01.906471462 +0000 UTC m=+1326.894711837" observedRunningTime="2026-03-14 09:19:07.9999227 +0000 UTC m=+1332.988163075" watchObservedRunningTime="2026-03-14 09:19:08.031634771 +0000 UTC m=+1333.019875146" Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.051858 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.130281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jb4md"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.137825 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.149724 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74f987fc4-zw2rw"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.158985 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dcd9ff5b-bprxd"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.166964 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.175501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:08 crc kubenswrapper[4687]: W0314 09:19:08.250850 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod916d888b_7929_48e0_b364_7b766afdf8ac.slice/crio-657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192 WatchSource:0}: Error finding container 657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192: Status 404 returned error can't find the container with id 657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192 Mar 14 09:19:08 crc kubenswrapper[4687]: W0314 09:19:08.253300 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00a62493_95c1_4765_8b9e_4188b68c587c.slice/crio-a914a6b7589f037c93eb9744fb91bc13bc248f0099f40f05c6cfc5328723c7ae WatchSource:0}: Error finding container a914a6b7589f037c93eb9744fb91bc13bc248f0099f40f05c6cfc5328723c7ae: Status 404 returned error can't find the 
container with id a914a6b7589f037c93eb9744fb91bc13bc248f0099f40f05c6cfc5328723c7ae Mar 14 09:19:08 crc kubenswrapper[4687]: W0314 09:19:08.261724 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda89460b9_5c8a_4000_ac6a_6202699a10d1.slice/crio-fee52c6148a3ae6b2acadfda6d812d0aeb158bd2a3106e8a2201281c6f19ae69 WatchSource:0}: Error finding container fee52c6148a3ae6b2acadfda6d812d0aeb158bd2a3106e8a2201281c6f19ae69: Status 404 returned error can't find the container with id fee52c6148a3ae6b2acadfda6d812d0aeb158bd2a3106e8a2201281c6f19ae69 Mar 14 09:19:08 crc kubenswrapper[4687]: W0314 09:19:08.275791 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod385cd3ae_b462_4307_97b1_3e2972678525.slice/crio-e65185c94499779da473e7abee5d2ddb98687eef8973a34fb4495cf38083738e WatchSource:0}: Error finding container e65185c94499779da473e7abee5d2ddb98687eef8973a34fb4495cf38083738e: Status 404 returned error can't find the container with id e65185c94499779da473e7abee5d2ddb98687eef8973a34fb4495cf38083738e Mar 14 09:19:08 crc kubenswrapper[4687]: I0314 09:19:08.275824 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.032683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerStarted","Data":"3b28e2c7972f0348a49a125b89e9ce7199febb5215b8303d993ce48a6253a6ce"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.037951 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerStarted","Data":"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.038019 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerStarted","Data":"e65185c94499779da473e7abee5d2ddb98687eef8973a34fb4495cf38083738e"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.042025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"26839da29db3ad222343db59ee77992714fd8eae0c20574d107bd03d66a72c5e"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.042102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"fee52c6148a3ae6b2acadfda6d812d0aeb158bd2a3106e8a2201281c6f19ae69"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.047424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4afadab3-26bf-47da-9b78-cbe30944d20f","Type":"ContainerStarted","Data":"d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.047493 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4afadab3-26bf-47da-9b78-cbe30944d20f","Type":"ContainerStarted","Data":"3c90d48151eaef2249b123006c02711d9054668056e11519b31cd0b6dec29684"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.052852 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"a28ba4a2aae0e7fb4a71d4c2d549c6d6a5fe64574f63ed9fcdf8caa29cc16ff5"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.052890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" 
event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"a914a6b7589f037c93eb9744fb91bc13bc248f0099f40f05c6cfc5328723c7ae"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.064200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jb4md" event={"ID":"5c9a5d82-1869-4fbd-924a-12451d765558","Type":"ContainerStarted","Data":"a69dd6b8fd4d105d4650ac42ce4aa18156d851af6acb83ed1d1698ee6d040eb5"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.064244 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jb4md" event={"ID":"5c9a5d82-1869-4fbd-924a-12451d765558","Type":"ContainerStarted","Data":"7bcb73b28e71e79bbae39d8f1a55b178d99e81b5ee4d363723d51c78afe24bb3"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.076584 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerID="458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008" exitCode=0 Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.076688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" event={"ID":"2b207391-08eb-4ce1-aebf-a49c10b21fed","Type":"ContainerDied","Data":"458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.076728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" event={"ID":"2b207391-08eb-4ce1-aebf-a49c10b21fed","Type":"ContainerStarted","Data":"fec0e867bb416ad8c665a5d31ab67ba4aad466700f324c838ebfcf4648a8be90"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.090441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerStarted","Data":"657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192"} Mar 14 09:19:09 crc 
kubenswrapper[4687]: I0314 09:19:09.101032 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jb4md" podStartSLOduration=18.101015208 podStartE2EDuration="18.101015208s" podCreationTimestamp="2026-03-14 09:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:09.095935494 +0000 UTC m=+1334.084175869" watchObservedRunningTime="2026-03-14 09:19:09.101015208 +0000 UTC m=+1334.089255583" Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.112018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerStarted","Data":"8514494b301ccfa0f2e2324278482d358e2daae67ba93feed66af9a25e62326d"} Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.127372 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.125224475 podStartE2EDuration="6.125224475s" podCreationTimestamp="2026-03-14 09:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:09.069959273 +0000 UTC m=+1334.058199648" watchObservedRunningTime="2026-03-14 09:19:09.125224475 +0000 UTC m=+1334.113464850" Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.159580 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:09 crc kubenswrapper[4687]: I0314 09:19:09.217259 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.128330 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerStarted","Data":"d13b9b86d742558e3021969d763ff7e111ebadfba47f22ede524574674be9c19"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.133478 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerStarted","Data":"446541510bb6682e0db85b98f9cdb59c058a07a9181314a8a214b3cccbb6bf8d"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.136139 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerStarted","Data":"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.136612 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.148369 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"c744803c90e354e2e743f2344b9a91906ccb0cb7c96bb653a8671ee32ad69010"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.170386 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=8.170364615 podStartE2EDuration="8.170364615s" podCreationTimestamp="2026-03-14 09:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:10.153784317 +0000 UTC m=+1335.142024692" watchObservedRunningTime="2026-03-14 09:19:10.170364615 +0000 UTC m=+1335.158604990" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.189775 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podStartSLOduration=29.06223518 podStartE2EDuration="29.189752723s" podCreationTimestamp="2026-03-14 09:18:41 +0000 UTC" firstStartedPulling="2026-03-14 09:19:08.256018682 +0000 UTC m=+1333.244259057" lastFinishedPulling="2026-03-14 09:19:08.383536225 +0000 UTC m=+1333.371776600" observedRunningTime="2026-03-14 09:19:10.178807723 +0000 UTC m=+1335.167048098" watchObservedRunningTime="2026-03-14 09:19:10.189752723 +0000 UTC m=+1335.177993098" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.191501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"955d7b1be1230c3cd065b412e0a3d78b4e2a9b69ed622511f6312f760a1ba327"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.199510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" event={"ID":"2b207391-08eb-4ce1-aebf-a49c10b21fed","Type":"ContainerStarted","Data":"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd"} Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.199944 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.200494 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.229520 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74f987fc4-zw2rw" podStartSLOduration=29.046165804 podStartE2EDuration="29.229503273s" podCreationTimestamp="2026-03-14 09:18:41 +0000 UTC" firstStartedPulling="2026-03-14 09:19:08.268350295 +0000 UTC m=+1333.256590670" lastFinishedPulling="2026-03-14 09:19:08.451687744 +0000 UTC m=+1333.439928139" observedRunningTime="2026-03-14 09:19:10.216256137 +0000 UTC m=+1335.204496512" watchObservedRunningTime="2026-03-14 
09:19:10.229503273 +0000 UTC m=+1335.217743648" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.255411 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" podStartSLOduration=36.255388441 podStartE2EDuration="36.255388441s" podCreationTimestamp="2026-03-14 09:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:10.246895832 +0000 UTC m=+1335.235136217" watchObservedRunningTime="2026-03-14 09:19:10.255388441 +0000 UTC m=+1335.243628816" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.307570 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:10 crc kubenswrapper[4687]: I0314 09:19:10.355604 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.210149 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerStarted","Data":"9730d763d639853fd830c818defcb2727d9e334926b073490f2f80aecae431cf"} Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.210605 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-log" containerID="cri-o://446541510bb6682e0db85b98f9cdb59c058a07a9181314a8a214b3cccbb6bf8d" gracePeriod=30 Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.211032 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-httpd" containerID="cri-o://9730d763d639853fd830c818defcb2727d9e334926b073490f2f80aecae431cf" gracePeriod=30 Mar 
14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.214815 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-log" containerID="cri-o://d13b9b86d742558e3021969d763ff7e111ebadfba47f22ede524574674be9c19" gracePeriod=30 Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.215538 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerStarted","Data":"9e957185f147153546d32cb56518a5184ff153b55baa0615db38cdd9d952bd0e"} Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.216023 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-httpd" containerID="cri-o://9e957185f147153546d32cb56518a5184ff153b55baa0615db38cdd9d952bd0e" gracePeriod=30 Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.278307 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=37.278282282 podStartE2EDuration="37.278282282s" podCreationTimestamp="2026-03-14 09:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:11.23882513 +0000 UTC m=+1336.227065505" watchObservedRunningTime="2026-03-14 09:19:11.278282282 +0000 UTC m=+1336.266522657" Mar 14 09:19:11 crc kubenswrapper[4687]: I0314 09:19:11.290665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=37.290618147 podStartE2EDuration="37.290618147s" podCreationTimestamp="2026-03-14 09:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:11.276051348 +0000 UTC m=+1336.264291723" watchObservedRunningTime="2026-03-14 09:19:11.290618147 +0000 UTC m=+1336.278858522" Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.128845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.129129 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.219745 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.219797 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.226428 4687 generic.go:334] "Generic (PLEG): container finished" podID="916d888b-7929-48e0-b364-7b766afdf8ac" containerID="9e957185f147153546d32cb56518a5184ff153b55baa0615db38cdd9d952bd0e" exitCode=143 Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.226467 4687 generic.go:334] "Generic (PLEG): container finished" podID="916d888b-7929-48e0-b364-7b766afdf8ac" containerID="d13b9b86d742558e3021969d763ff7e111ebadfba47f22ede524574674be9c19" exitCode=143 Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.226531 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerDied","Data":"9e957185f147153546d32cb56518a5184ff153b55baa0615db38cdd9d952bd0e"} Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.226593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerDied","Data":"d13b9b86d742558e3021969d763ff7e111ebadfba47f22ede524574674be9c19"} Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.228846 4687 generic.go:334] "Generic (PLEG): container finished" podID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerID="9730d763d639853fd830c818defcb2727d9e334926b073490f2f80aecae431cf" exitCode=143 Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.228870 4687 generic.go:334] "Generic (PLEG): container finished" podID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerID="446541510bb6682e0db85b98f9cdb59c058a07a9181314a8a214b3cccbb6bf8d" exitCode=143 Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.229718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerDied","Data":"9730d763d639853fd830c818defcb2727d9e334926b073490f2f80aecae431cf"} Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.229750 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerDied","Data":"446541510bb6682e0db85b98f9cdb59c058a07a9181314a8a214b3cccbb6bf8d"} Mar 14 09:19:12 crc kubenswrapper[4687]: I0314 09:19:12.230253 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" containerID="cri-o://6b34699a48807d1be528b0f4a03ac614f0f1cbba9aa9ff96331416b911e21237" gracePeriod=30 Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.355221 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.355598 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 
09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.355668 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.475361 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.475421 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.508143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.923070 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:13 crc kubenswrapper[4687]: I0314 09:19:13.933557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.251109 4687 generic.go:334] "Generic (PLEG): container finished" podID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerID="6b34699a48807d1be528b0f4a03ac614f0f1cbba9aa9ff96331416b911e21237" exitCode=1 Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.252634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerDied","Data":"6b34699a48807d1be528b0f4a03ac614f0f1cbba9aa9ff96331416b911e21237"} Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.252683 4687 scope.go:117] "RemoveContainer" containerID="3960cfe9e23cb219e95af68664390960e5f6f64891475303e61e06d94c7b920e" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.257920 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.292670 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.487678 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504247 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504321 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504466 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504664 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdd29\" (UniqueName: \"kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 
crc kubenswrapper[4687]: I0314 09:19:14.504691 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.504742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs\") pod \"d627081e-8d7f-4653-809f-63f8d6e88bc2\" (UID: \"d627081e-8d7f-4653-809f-63f8d6e88bc2\") " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.508650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs" (OuterVolumeSpecName: "logs") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.509129 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.540517 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29" (OuterVolumeSpecName: "kube-api-access-cdd29") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "kube-api-access-cdd29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.541431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.550541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts" (OuterVolumeSpecName: "scripts") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.585145 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607753 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607786 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdd29\" (UniqueName: \"kubernetes.io/projected/d627081e-8d7f-4653-809f-63f8d6e88bc2-kube-api-access-cdd29\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607798 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607805 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d627081e-8d7f-4653-809f-63f8d6e88bc2-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607831 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.607839 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.610789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data" (OuterVolumeSpecName: "config-data") pod "d627081e-8d7f-4653-809f-63f8d6e88bc2" (UID: "d627081e-8d7f-4653-809f-63f8d6e88bc2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.634541 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.652973 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.709102 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.709131 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d627081e-8d7f-4653-809f-63f8d6e88bc2-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.744789 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:19:14 crc kubenswrapper[4687]: I0314 09:19:14.745017 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="dnsmasq-dns" containerID="cri-o://4f561719f09a1bed1bc832f22b5acc97b34eabe4761685555c11ff3e8fe23a08" gracePeriod=10 Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.269640 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.269679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d627081e-8d7f-4653-809f-63f8d6e88bc2","Type":"ContainerDied","Data":"3b28e2c7972f0348a49a125b89e9ce7199febb5215b8303d993ce48a6253a6ce"} Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.273830 4687 generic.go:334] "Generic (PLEG): container finished" podID="85071f99-e190-449d-887e-1a0ac20ca074" containerID="4f561719f09a1bed1bc832f22b5acc97b34eabe4761685555c11ff3e8fe23a08" exitCode=0 Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.274035 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" event={"ID":"85071f99-e190-449d-887e-1a0ac20ca074","Type":"ContainerDied","Data":"4f561719f09a1bed1bc832f22b5acc97b34eabe4761685555c11ff3e8fe23a08"} Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.307423 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.316407 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.330948 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:15 crc kubenswrapper[4687]: E0314 09:19:15.331300 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-httpd" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.331317 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-httpd" Mar 14 09:19:15 crc kubenswrapper[4687]: E0314 09:19:15.331354 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-log" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.331361 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-log" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.331557 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-log" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.331577 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" containerName="glance-httpd" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.332678 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.336073 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.336530 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.350962 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526462 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rck7\" (UniqueName: \"kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.526849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628506 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rck7\" (UniqueName: \"kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628584 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.628621 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.629729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.630084 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.631411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.634297 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.635074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.639156 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " 
pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.639366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.663759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.681894 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rck7\" (UniqueName: \"kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7\") pod \"glance-default-internal-api-0\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.750577 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d627081e-8d7f-4653-809f-63f8d6e88bc2" path="/var/lib/kubelet/pods/d627081e-8d7f-4653-809f-63f8d6e88bc2/volumes" Mar 14 09:19:15 crc kubenswrapper[4687]: I0314 09:19:15.976641 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.096864 4687 scope.go:117] "RemoveContainer" containerID="8f4267d3e2545e9e23176c353790c680d7bd27ee9596d7727dedfa495f39ed03" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.300530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"916d888b-7929-48e0-b364-7b766afdf8ac","Type":"ContainerDied","Data":"657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192"} Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.300591 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657ae3c2d125631a0191313cdcdb2088453ebfa1b689fa3d6081091623b42192" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.311918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4a1ec16d-f5de-454a-9f13-0bc248e30307","Type":"ContainerDied","Data":"622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef"} Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.311987 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622c594115b0fb30e99347dfe65e06ed6be7e654205d3cdb2823d491f8d5deef" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.328980 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.333592 4687 scope.go:117] "RemoveContainer" containerID="9730d763d639853fd830c818defcb2727d9e334926b073490f2f80aecae431cf" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.338810 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.450446 4687 scope.go:117] "RemoveContainer" containerID="446541510bb6682e0db85b98f9cdb59c058a07a9181314a8a214b3cccbb6bf8d" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461060 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca\") pod \"4a1ec16d-f5de-454a-9f13-0bc248e30307\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data\") pod \"4a1ec16d-f5de-454a-9f13-0bc248e30307\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461195 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle\") pod \"4a1ec16d-f5de-454a-9f13-0bc248e30307\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " Mar 14 09:19:17 crc 
kubenswrapper[4687]: I0314 09:19:17.461306 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9ws2\" (UniqueName: \"kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2\") pod \"4a1ec16d-f5de-454a-9f13-0bc248e30307\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5226\" (UniqueName: \"kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461405 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461464 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461542 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs\") pod 
\"4a1ec16d-f5de-454a-9f13-0bc248e30307\" (UID: \"4a1ec16d-f5de-454a-9f13-0bc248e30307\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.461570 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data\") pod \"916d888b-7929-48e0-b364-7b766afdf8ac\" (UID: \"916d888b-7929-48e0-b364-7b766afdf8ac\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.462621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs" (OuterVolumeSpecName: "logs") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.465542 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.466200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs" (OuterVolumeSpecName: "logs") pod "4a1ec16d-f5de-454a-9f13-0bc248e30307" (UID: "4a1ec16d-f5de-454a-9f13-0bc248e30307"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.479468 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2" (OuterVolumeSpecName: "kube-api-access-x9ws2") pod "4a1ec16d-f5de-454a-9f13-0bc248e30307" (UID: "4a1ec16d-f5de-454a-9f13-0bc248e30307"). InnerVolumeSpecName "kube-api-access-x9ws2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.498003 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts" (OuterVolumeSpecName: "scripts") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.497994 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.499040 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226" (OuterVolumeSpecName: "kube-api-access-x5226") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "kube-api-access-x5226". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.509922 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.510141 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api-log" containerID="cri-o://80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc" gracePeriod=30 Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.510262 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api" containerID="cri-o://e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4" gracePeriod=30 Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584863 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584897 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1ec16d-f5de-454a-9f13-0bc248e30307-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584924 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584936 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/916d888b-7929-48e0-b364-7b766afdf8ac-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584945 4687 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-x9ws2\" (UniqueName: \"kubernetes.io/projected/4a1ec16d-f5de-454a-9f13-0bc248e30307-kube-api-access-x9ws2\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584955 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5226\" (UniqueName: \"kubernetes.io/projected/916d888b-7929-48e0-b364-7b766afdf8ac-kube-api-access-x5226\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.584963 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.595893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.620725 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1ec16d-f5de-454a-9f13-0bc248e30307" (UID: "4a1ec16d-f5de-454a-9f13-0bc248e30307"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.654998 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4a1ec16d-f5de-454a-9f13-0bc248e30307" (UID: "4a1ec16d-f5de-454a-9f13-0bc248e30307"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.686969 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.687009 4687 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.687024 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.687063 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.710147 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data" (OuterVolumeSpecName: "config-data") pod "916d888b-7929-48e0-b364-7b766afdf8ac" (UID: "916d888b-7929-48e0-b364-7b766afdf8ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.735576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data" (OuterVolumeSpecName: "config-data") pod "4a1ec16d-f5de-454a-9f13-0bc248e30307" (UID: "4a1ec16d-f5de-454a-9f13-0bc248e30307"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.744696 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.789460 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916d888b-7929-48e0-b364-7b766afdf8ac-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.789496 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.789506 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1ec16d-f5de-454a-9f13-0bc248e30307-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.892023 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb\") pod \"85071f99-e190-449d-887e-1a0ac20ca074\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.892153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc\") pod \"85071f99-e190-449d-887e-1a0ac20ca074\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.892204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config\") pod \"85071f99-e190-449d-887e-1a0ac20ca074\" (UID: 
\"85071f99-e190-449d-887e-1a0ac20ca074\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.892248 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb\") pod \"85071f99-e190-449d-887e-1a0ac20ca074\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.892391 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpsxk\" (UniqueName: \"kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk\") pod \"85071f99-e190-449d-887e-1a0ac20ca074\" (UID: \"85071f99-e190-449d-887e-1a0ac20ca074\") " Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.903632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk" (OuterVolumeSpecName: "kube-api-access-zpsxk") pod "85071f99-e190-449d-887e-1a0ac20ca074" (UID: "85071f99-e190-449d-887e-1a0ac20ca074"). InnerVolumeSpecName "kube-api-access-zpsxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.976128 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85071f99-e190-449d-887e-1a0ac20ca074" (UID: "85071f99-e190-449d-887e-1a0ac20ca074"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.985745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85071f99-e190-449d-887e-1a0ac20ca074" (UID: "85071f99-e190-449d-887e-1a0ac20ca074"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.989475 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config" (OuterVolumeSpecName: "config") pod "85071f99-e190-449d-887e-1a0ac20ca074" (UID: "85071f99-e190-449d-887e-1a0ac20ca074"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.992747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85071f99-e190-449d-887e-1a0ac20ca074" (UID: "85071f99-e190-449d-887e-1a0ac20ca074"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.995163 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.995215 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.995231 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.995240 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85071f99-e190-449d-887e-1a0ac20ca074-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:17 crc kubenswrapper[4687]: I0314 09:19:17.995268 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpsxk\" (UniqueName: \"kubernetes.io/projected/85071f99-e190-449d-887e-1a0ac20ca074-kube-api-access-zpsxk\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.159186 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: W0314 09:19:18.161917 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3af59caa_4445_408d_9c7a_ffed88917fa3.slice/crio-d4fde478e2f6c4ee612d878dab57f700ef5c659ece4e46c53320fe768302452c WatchSource:0}: Error finding container d4fde478e2f6c4ee612d878dab57f700ef5c659ece4e46c53320fe768302452c: Status 404 returned error can't find 
the container with id d4fde478e2f6c4ee612d878dab57f700ef5c659ece4e46c53320fe768302452c Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.337557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hsvt5" event={"ID":"1ffe58c5-8c6d-4c28-9379-3e08e365adef","Type":"ContainerStarted","Data":"1798aa60f8067f36800b36906cbd2bf97ff98e72ce95873d2af8ee6339e7cbea"} Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.340146 4687 generic.go:334] "Generic (PLEG): container finished" podID="385cd3ae-b462-4307-97b1-3e2972678525" containerID="80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc" exitCode=143 Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.340229 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerDied","Data":"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc"} Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.342739 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.345966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerStarted","Data":"d4fde478e2f6c4ee612d878dab57f700ef5c659ece4e46c53320fe768302452c"} Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.351458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerStarted","Data":"cf8626fee02672c19e2ad8d4855eb1be552795c82b5a3a4c9542e44a97e90c2b"} Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.357733 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hsvt5" podStartSLOduration=3.365429804 podStartE2EDuration="50.357716562s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="2026-03-14 09:18:30.744638853 +0000 UTC m=+1295.732879238" lastFinishedPulling="2026-03-14 09:19:17.736925621 +0000 UTC m=+1342.725165996" observedRunningTime="2026-03-14 09:19:18.355789124 +0000 UTC m=+1343.344029509" watchObservedRunningTime="2026-03-14 09:19:18.357716562 +0000 UTC m=+1343.345956937" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.368629 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.369771 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.371463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f544d5-hcqft" event={"ID":"85071f99-e190-449d-887e-1a0ac20ca074","Type":"ContainerDied","Data":"e47cc2859f8b51eb5e73b9a43330415d035c318642a93cae2d403fa134bc65a1"} Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.371531 4687 scope.go:117] "RemoveContainer" containerID="4f561719f09a1bed1bc832f22b5acc97b34eabe4761685555c11ff3e8fe23a08" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.377893 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.387440 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.408941 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409358 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409377 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409392 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-httpd" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409398 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-httpd" Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409415 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409420 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409428 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="dnsmasq-dns" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409434 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="dnsmasq-dns" Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409445 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-log" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409450 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-log" Mar 14 09:19:18 crc kubenswrapper[4687]: E0314 09:19:18.409461 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="init" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409466 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="init" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409620 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409632 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="85071f99-e190-449d-887e-1a0ac20ca074" containerName="dnsmasq-dns" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409649 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" containerName="watcher-decision-engine" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409661 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-log" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.409672 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" containerName="glance-httpd" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.411070 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.420396 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.429955 4687 scope.go:117] "RemoveContainer" containerID="2fdc091cb12ef27cdae416a588ee3bdd3e9c67889dc8d7d79bbd04c13ff5eb72" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.429967 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.471548 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.478866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.511670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.511817 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.511905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.511984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g6mw\" (UniqueName: \"kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.512012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.529400 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.530863 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.536943 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.537595 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.537682 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.554964 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.571118 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb4f544d5-hcqft"] Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.614940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.615085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g6mw\" (UniqueName: \"kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.615126 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: 
\"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.615198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.615279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.615765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.619830 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.637928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: 
I0314 09:19:18.638828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.644830 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g6mw\" (UniqueName: \"kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw\") pod \"watcher-decision-engine-0\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.653784 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 10.217.0.2:33686->10.217.0.171:9322: read: connection reset by peer" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.653865 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 10.217.0.2:33696->10.217.0.171:9322: read: connection reset by peer" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716507 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716536 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g8c\" (UniqueName: \"kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716859 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.716956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.746793 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.819844 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.819913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.819985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92g8c\" (UniqueName: \"kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " 
pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.820024 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.820073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.820109 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.820165 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.820251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: 
I0314 09:19:18.821403 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.821407 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.825828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.826501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.828082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.828489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.829360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.841959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92g8c\" (UniqueName: \"kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:18 crc kubenswrapper[4687]: I0314 09:19:18.910810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.072915 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.163900 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.229854 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8pqs\" (UniqueName: \"kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs\") pod \"385cd3ae-b462-4307-97b1-3e2972678525\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.230087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs\") pod \"385cd3ae-b462-4307-97b1-3e2972678525\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.230151 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca\") pod \"385cd3ae-b462-4307-97b1-3e2972678525\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.230181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data\") pod \"385cd3ae-b462-4307-97b1-3e2972678525\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.230246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle\") pod \"385cd3ae-b462-4307-97b1-3e2972678525\" (UID: \"385cd3ae-b462-4307-97b1-3e2972678525\") " Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.231627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs" (OuterVolumeSpecName: "logs") pod "385cd3ae-b462-4307-97b1-3e2972678525" (UID: "385cd3ae-b462-4307-97b1-3e2972678525"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.237725 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs" (OuterVolumeSpecName: "kube-api-access-h8pqs") pod "385cd3ae-b462-4307-97b1-3e2972678525" (UID: "385cd3ae-b462-4307-97b1-3e2972678525"). InnerVolumeSpecName "kube-api-access-h8pqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.290641 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "385cd3ae-b462-4307-97b1-3e2972678525" (UID: "385cd3ae-b462-4307-97b1-3e2972678525"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.307977 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "385cd3ae-b462-4307-97b1-3e2972678525" (UID: "385cd3ae-b462-4307-97b1-3e2972678525"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.326885 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.334087 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385cd3ae-b462-4307-97b1-3e2972678525-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.334127 4687 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.334142 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.334159 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8pqs\" (UniqueName: \"kubernetes.io/projected/385cd3ae-b462-4307-97b1-3e2972678525-kube-api-access-h8pqs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:19 crc kubenswrapper[4687]: W0314 09:19:19.355103 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeee82ec5_3847_4115_ac3c_5d9590930169.slice/crio-d046b164ff0e22529192f37cb782838563bf4d30b295673ad9aa8bc35b98b3dd WatchSource:0}: Error finding container d046b164ff0e22529192f37cb782838563bf4d30b295673ad9aa8bc35b98b3dd: Status 404 returned error can't find the container with id d046b164ff0e22529192f37cb782838563bf4d30b295673ad9aa8bc35b98b3dd Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.369532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data" (OuterVolumeSpecName: "config-data") pod "385cd3ae-b462-4307-97b1-3e2972678525" (UID: "385cd3ae-b462-4307-97b1-3e2972678525"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.400400 4687 generic.go:334] "Generic (PLEG): container finished" podID="82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" containerID="2e8b97f43d874ea3f51915e5fb56d0a2ab59daae20c06d9af5c184ae558e0773" exitCode=0 Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.400454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-224sh" event={"ID":"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88","Type":"ContainerDied","Data":"2e8b97f43d874ea3f51915e5fb56d0a2ab59daae20c06d9af5c184ae558e0773"} Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.402902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerStarted","Data":"4915c64d332bdf697424c821840524e1b7ace5747307b878c04490adb0b89408"} Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.407560 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerStarted","Data":"d046b164ff0e22529192f37cb782838563bf4d30b295673ad9aa8bc35b98b3dd"} Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.410835 4687 generic.go:334] "Generic (PLEG): container finished" podID="385cd3ae-b462-4307-97b1-3e2972678525" containerID="e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4" exitCode=0 Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.410866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerDied","Data":"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4"} Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.410884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"385cd3ae-b462-4307-97b1-3e2972678525","Type":"ContainerDied","Data":"e65185c94499779da473e7abee5d2ddb98687eef8973a34fb4495cf38083738e"} Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.410901 4687 scope.go:117] "RemoveContainer" containerID="e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.411016 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.437414 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385cd3ae-b462-4307-97b1-3e2972678525-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.602321 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.613515 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.625364 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:19 crc kubenswrapper[4687]: E0314 09:19:19.625784 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.625806 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api" Mar 14 09:19:19 crc kubenswrapper[4687]: E0314 09:19:19.625834 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api-log" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.625841 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api-log" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.626035 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.626057 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="385cd3ae-b462-4307-97b1-3e2972678525" containerName="watcher-api-log" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.628069 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.629317 4687 scope.go:117] "RemoveContainer" containerID="80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.632156 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.632436 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.632585 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.660242 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.717554 4687 scope.go:117] "RemoveContainer" containerID="e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4" Mar 14 09:19:19 crc kubenswrapper[4687]: E0314 09:19:19.718093 4687 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4\": container with ID starting with e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4 not found: ID does not exist" containerID="e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.718137 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4"} err="failed to get container status \"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4\": rpc error: code = NotFound desc = could not find container \"e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4\": container with ID starting with e2f8098d9a228a50ca44dc50abb34496aed852fd564d0e5f8be19b38bad0c8a4 not found: ID does not exist" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.718163 4687 scope.go:117] "RemoveContainer" containerID="80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc" Mar 14 09:19:19 crc kubenswrapper[4687]: E0314 09:19:19.720741 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc\": container with ID starting with 80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc not found: ID does not exist" containerID="80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.720794 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc"} err="failed to get container status \"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc\": rpc error: code = NotFound desc = could not find container 
\"80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc\": container with ID starting with 80d071b0b681bbe612f0b24dbeb11eeef033865ef4c11f6b4262d66a6678c0dc not found: ID does not exist" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.743142 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzlw8\" (UniqueName: \"kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.745833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.745869 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.746029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.747676 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data\") pod \"watcher-api-0\" 
(UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.747747 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.747883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.758197 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385cd3ae-b462-4307-97b1-3e2972678525" path="/var/lib/kubelet/pods/385cd3ae-b462-4307-97b1-3e2972678525/volumes" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.761003 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1ec16d-f5de-454a-9f13-0bc248e30307" path="/var/lib/kubelet/pods/4a1ec16d-f5de-454a-9f13-0bc248e30307/volumes" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.762276 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85071f99-e190-449d-887e-1a0ac20ca074" path="/var/lib/kubelet/pods/85071f99-e190-449d-887e-1a0ac20ca074/volumes" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.763723 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916d888b-7929-48e0-b364-7b766afdf8ac" path="/var/lib/kubelet/pods/916d888b-7929-48e0-b364-7b766afdf8ac/volumes" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.849675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.849734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.849789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.849842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.849889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzlw8\" (UniqueName: \"kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.850541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " 
pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.850573 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.850935 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.856087 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.856576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.856976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.857115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.857961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.868961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzlw8\" (UniqueName: \"kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8\") pod \"watcher-api-0\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " pod="openstack/watcher-api-0" Mar 14 09:19:19 crc kubenswrapper[4687]: I0314 09:19:19.967605 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.003251 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:20 crc kubenswrapper[4687]: W0314 09:19:20.020177 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod031bbc58_0099_459e_836e_e1c58bd86f4a.slice/crio-5680101488bc53437f606ecbe80e1d73530e5aecd97732a05b6efc986b959764 WatchSource:0}: Error finding container 5680101488bc53437f606ecbe80e1d73530e5aecd97732a05b6efc986b959764: Status 404 returned error can't find the container with id 5680101488bc53437f606ecbe80e1d73530e5aecd97732a05b6efc986b959764 Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.275544 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:20 crc kubenswrapper[4687]: W0314 09:19:20.286591 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod266d9643_02bf_4a10_b3ba_fa6706150eb3.slice/crio-c71c9f3f61cc72a12e1bc1bdf6455213b52bfa9a9f4bf75891e403fb844da31e WatchSource:0}: Error finding container c71c9f3f61cc72a12e1bc1bdf6455213b52bfa9a9f4bf75891e403fb844da31e: Status 404 returned error can't find the container with id c71c9f3f61cc72a12e1bc1bdf6455213b52bfa9a9f4bf75891e403fb844da31e Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.446273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerStarted","Data":"5680101488bc53437f606ecbe80e1d73530e5aecd97732a05b6efc986b959764"} Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.452812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerStarted","Data":"c71c9f3f61cc72a12e1bc1bdf6455213b52bfa9a9f4bf75891e403fb844da31e"} Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.467416 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerStarted","Data":"96d1c1bfa20a063ccb43bde90999f15ab5163e4da6555f6328a45698e8a1db72"} Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.480520 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerStarted","Data":"d0b04688bd212e1bc8da56d39a87a4b30639d095963eaa90585097c1419b8958"} Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.520471 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.520450368 podStartE2EDuration="2.520450368s" podCreationTimestamp="2026-03-14 09:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:20.515402533 +0000 UTC m=+1345.503642908" watchObservedRunningTime="2026-03-14 09:19:20.520450368 +0000 UTC m=+1345.508690743" Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.522384 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.522372595 podStartE2EDuration="5.522372595s" podCreationTimestamp="2026-03-14 09:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:20.500097816 +0000 UTC m=+1345.488338191" watchObservedRunningTime="2026-03-14 09:19:20.522372595 +0000 UTC m=+1345.510612970" Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.849917 4687 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-224sh" Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.982176 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts\") pod \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.982312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle\") pod \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.982454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data\") pod \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.982492 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpx48\" (UniqueName: \"kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48\") pod \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.982538 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs\") pod \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\" (UID: \"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88\") " Mar 14 09:19:20 crc kubenswrapper[4687]: I0314 09:19:20.986965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs" (OuterVolumeSpecName: "logs") pod "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" (UID: "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:20.998936 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts" (OuterVolumeSpecName: "scripts") pod "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" (UID: "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.021918 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48" (OuterVolumeSpecName: "kube-api-access-cpx48") pod "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" (UID: "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88"). InnerVolumeSpecName "kube-api-access-cpx48". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.050781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data" (OuterVolumeSpecName: "config-data") pod "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" (UID: "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.056681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" (UID: "82e40d20-4fba-44d2-b6f9-ce8c2ac65e88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.084746 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.084786 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpx48\" (UniqueName: \"kubernetes.io/projected/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-kube-api-access-cpx48\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.084835 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.084844 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.084853 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.507690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerStarted","Data":"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc"} Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.509772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-224sh" event={"ID":"82e40d20-4fba-44d2-b6f9-ce8c2ac65e88","Type":"ContainerDied","Data":"a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df"} Mar 14 
09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.509814 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b3dcfaa4a323dd4096bb04672003a8b95d76d3d4ca547dd290b90c1fd880df" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.509900 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-224sh" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.539878 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerStarted","Data":"819059fba7527cc779425d8dc2093dff53db9b3069ed9d5f60b6474530bea4c4"} Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.540181 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerStarted","Data":"069f96fb16f54c0be976c0cd8a345e3e2baf9f2e59692132df60655006fe1445"} Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.540720 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.550629 4687 generic.go:334] "Generic (PLEG): container finished" podID="5c9a5d82-1869-4fbd-924a-12451d765558" containerID="a69dd6b8fd4d105d4650ac42ce4aa18156d851af6acb83ed1d1698ee6d040eb5" exitCode=0 Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.551401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jb4md" event={"ID":"5c9a5d82-1869-4fbd-924a-12451d765558","Type":"ContainerDied","Data":"a69dd6b8fd4d105d4650ac42ce4aa18156d851af6acb83ed1d1698ee6d040eb5"} Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.595069 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:19:21 crc kubenswrapper[4687]: E0314 09:19:21.607726 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" containerName="placement-db-sync" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.607792 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" containerName="placement-db-sync" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.608002 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" containerName="placement-db-sync" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.609072 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.614773 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.614989 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.615091 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.615600 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.615764 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z5twx" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.670471 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.688194 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.688173279 podStartE2EDuration="2.688173279s" podCreationTimestamp="2026-03-14 09:19:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:21.601577574 +0000 UTC m=+1346.589817949" watchObservedRunningTime="2026-03-14 09:19:21.688173279 +0000 UTC m=+1346.676413654" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64jc\" (UniqueName: \"kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704312 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704385 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.704405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807378 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64jc\" (UniqueName: \"kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807767 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.807899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.808289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.814995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.821472 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.828658 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.833970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.834539 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64jc\" (UniqueName: \"kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: 
\"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:21 crc kubenswrapper[4687]: I0314 09:19:21.837964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs\") pod \"placement-5f7cf44c9b-7cnmp\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.003491 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.129031 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.231263 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.169:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.169:8443: connect: connection refused" Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.559890 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:19:22 crc kubenswrapper[4687]: W0314 09:19:22.564765 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod944ab990_3a74_471a_b889_6992cdd509b7.slice/crio-dd798f6a8c148d73c57d4cc1509bcdb78986505d887cc22a5c3d316043a2be9f WatchSource:0}: Error finding container 
dd798f6a8c148d73c57d4cc1509bcdb78986505d887cc22a5c3d316043a2be9f: Status 404 returned error can't find the container with id dd798f6a8c148d73c57d4cc1509bcdb78986505d887cc22a5c3d316043a2be9f Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.571952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerStarted","Data":"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74"} Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.575899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k6kmw" event={"ID":"21832052-3293-4320-aed2-58a020acb502","Type":"ContainerStarted","Data":"93eb5b6111a0bc45abc0164906acf568235ca1195233623645c8313217837de5"} Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.595094 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-k6kmw" podStartSLOduration=3.919878669 podStartE2EDuration="54.595075802s" podCreationTimestamp="2026-03-14 09:18:28 +0000 UTC" firstStartedPulling="2026-03-14 09:18:30.358490385 +0000 UTC m=+1295.346730760" lastFinishedPulling="2026-03-14 09:19:21.033687518 +0000 UTC m=+1346.021927893" observedRunningTime="2026-03-14 09:19:22.59214242 +0000 UTC m=+1347.580382795" watchObservedRunningTime="2026-03-14 09:19:22.595075802 +0000 UTC m=+1347.583316177" Mar 14 09:19:22 crc kubenswrapper[4687]: I0314 09:19:22.960039 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.057718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5cqw\" (UniqueName: \"kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.058084 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.058113 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.058160 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.058385 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.058410 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys\") pod \"5c9a5d82-1869-4fbd-924a-12451d765558\" (UID: \"5c9a5d82-1869-4fbd-924a-12451d765558\") " Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.064455 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.064550 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw" (OuterVolumeSpecName: "kube-api-access-x5cqw") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "kube-api-access-x5cqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.064795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.068489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts" (OuterVolumeSpecName: "scripts") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.091570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.103458 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data" (OuterVolumeSpecName: "config-data") pod "5c9a5d82-1869-4fbd-924a-12451d765558" (UID: "5c9a5d82-1869-4fbd-924a-12451d765558"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.160619 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5cqw\" (UniqueName: \"kubernetes.io/projected/5c9a5d82-1869-4fbd-924a-12451d765558-kube-api-access-x5cqw\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.160659 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.160671 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.160681 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc 
kubenswrapper[4687]: I0314 09:19:23.160688 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.160696 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c9a5d82-1869-4fbd-924a-12451d765558-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.630917 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerStarted","Data":"e9022b45966226831f90b1e9c94bad3b1e0b94a15e818bcc823da46d2f3ef897"} Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.631255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerStarted","Data":"cab98d08164cf4c2aec6e204485abc621dfc232bf9af36cf4f62727915dce228"} Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.631272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerStarted","Data":"dd798f6a8c148d73c57d4cc1509bcdb78986505d887cc22a5c3d316043a2be9f"} Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.631310 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.631353 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.663861 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jb4md" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.667646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jb4md" event={"ID":"5c9a5d82-1869-4fbd-924a-12451d765558","Type":"ContainerDied","Data":"7bcb73b28e71e79bbae39d8f1a55b178d99e81b5ee4d363723d51c78afe24bb3"} Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.667697 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcb73b28e71e79bbae39d8f1a55b178d99e81b5ee4d363723d51c78afe24bb3" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.678304 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f7cf44c9b-7cnmp" podStartSLOduration=2.67828228 podStartE2EDuration="2.67828228s" podCreationTimestamp="2026-03-14 09:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:23.668569931 +0000 UTC m=+1348.656810306" watchObservedRunningTime="2026-03-14 09:19:23.67828228 +0000 UTC m=+1348.666522655" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.734130 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.734094805 podStartE2EDuration="5.734094805s" podCreationTimestamp="2026-03-14 09:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:23.698807326 +0000 UTC m=+1348.687047701" watchObservedRunningTime="2026-03-14 09:19:23.734094805 +0000 UTC m=+1348.722335180" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.789211 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7978d574c6-8llvn"] Mar 14 09:19:23 crc kubenswrapper[4687]: E0314 09:19:23.789879 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5c9a5d82-1869-4fbd-924a-12451d765558" containerName="keystone-bootstrap" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.789899 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9a5d82-1869-4fbd-924a-12451d765558" containerName="keystone-bootstrap" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.790165 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9a5d82-1869-4fbd-924a-12451d765558" containerName="keystone-bootstrap" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.791462 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.795121 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.798776 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.798983 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.799141 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.799258 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4vhwn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.799417 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.803361 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7978d574c6-8llvn"] Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.877449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-fernet-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.877666 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f86z\" (UniqueName: \"kubernetes.io/projected/5046ed05-72aa-4064-a9fd-940663b5844b-kube-api-access-6f86z\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.877765 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-config-data\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.877808 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-combined-ca-bundle\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.877927 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-scripts\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.878031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-credential-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.878069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-internal-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.878106 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-public-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-config-data\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-combined-ca-bundle\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-scripts\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-credential-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-internal-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-public-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980423 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-fernet-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.980473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f86z\" (UniqueName: \"kubernetes.io/projected/5046ed05-72aa-4064-a9fd-940663b5844b-kube-api-access-6f86z\") pod 
\"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.985283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-credential-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.985384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-public-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.987163 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-config-data\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.992066 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-combined-ca-bundle\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:23 crc kubenswrapper[4687]: I0314 09:19:23.996289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-internal-tls-certs\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" 
Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.001534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-scripts\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.005802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5046ed05-72aa-4064-a9fd-940663b5844b-fernet-keys\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.006232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f86z\" (UniqueName: \"kubernetes.io/projected/5046ed05-72aa-4064-a9fd-940663b5844b-kube-api-access-6f86z\") pod \"keystone-7978d574c6-8llvn\" (UID: \"5046ed05-72aa-4064-a9fd-940663b5844b\") " pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.112775 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.113065 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.116251 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.548791 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c699fbccb-rdl22"] Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.551514 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.575785 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c699fbccb-rdl22"] Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.594377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbks\" (UniqueName: \"kubernetes.io/projected/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-kube-api-access-5bbks\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595041 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-scripts\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595104 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-logs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595245 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-internal-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-combined-ca-bundle\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-config-data\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.595501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-public-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.702930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-internal-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703190 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-combined-ca-bundle\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703211 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-config-data\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-public-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703316 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbks\" (UniqueName: \"kubernetes.io/projected/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-kube-api-access-5bbks\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703367 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-scripts\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703398 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-logs\") pod 
\"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.703762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-logs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.710943 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-scripts\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.711258 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-config-data\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.711312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-public-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.711342 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-combined-ca-bundle\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc 
kubenswrapper[4687]: I0314 09:19:24.711775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-internal-tls-certs\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.729955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbks\" (UniqueName: \"kubernetes.io/projected/e3088e0c-b79c-41f3-9fc6-ef0d797943e0-kube-api-access-5bbks\") pod \"placement-6c699fbccb-rdl22\" (UID: \"e3088e0c-b79c-41f3-9fc6-ef0d797943e0\") " pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.757564 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7978d574c6-8llvn"] Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.890771 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.968558 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:24 crc kubenswrapper[4687]: I0314 09:19:24.968670 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.585380 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.693039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7978d574c6-8llvn" event={"ID":"5046ed05-72aa-4064-a9fd-940663b5844b","Type":"ContainerStarted","Data":"39f6fe18e5636eff82c1673067de6b43e2c6fdd12c406c3539f71999e9884ff8"} Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.693086 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7978d574c6-8llvn" event={"ID":"5046ed05-72aa-4064-a9fd-940663b5844b","Type":"ContainerStarted","Data":"993266efa004f39deef5e6b5d3ceda36be513a9a9a9c6ba4968d4f3ad3a52716"} Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.693166 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.696636 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="c744803c90e354e2e743f2344b9a91906ccb0cb7c96bb653a8671ee32ad69010" exitCode=1 Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.696706 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"c744803c90e354e2e743f2344b9a91906ccb0cb7c96bb653a8671ee32ad69010"} Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.698072 4687 scope.go:117] "RemoveContainer" 
containerID="c744803c90e354e2e743f2344b9a91906ccb0cb7c96bb653a8671ee32ad69010" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.701218 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="955d7b1be1230c3cd065b412e0a3d78b4e2a9b69ed622511f6312f760a1ba327" exitCode=1 Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.701272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"955d7b1be1230c3cd065b412e0a3d78b4e2a9b69ed622511f6312f760a1ba327"} Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.702093 4687 scope.go:117] "RemoveContainer" containerID="955d7b1be1230c3cd065b412e0a3d78b4e2a9b69ed622511f6312f760a1ba327" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.713188 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7978d574c6-8llvn" podStartSLOduration=2.713163015 podStartE2EDuration="2.713163015s" podCreationTimestamp="2026-03-14 09:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:25.711876373 +0000 UTC m=+1350.700116738" watchObservedRunningTime="2026-03-14 09:19:25.713163015 +0000 UTC m=+1350.701403400" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.718076 4687 generic.go:334] "Generic (PLEG): container finished" podID="eee82ec5-3847-4115-ac3c-5d9590930169" containerID="d0b04688bd212e1bc8da56d39a87a4b30639d095963eaa90585097c1419b8958" exitCode=1 Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.718187 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerDied","Data":"d0b04688bd212e1bc8da56d39a87a4b30639d095963eaa90585097c1419b8958"} Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 
09:19:25.719060 4687 scope.go:117] "RemoveContainer" containerID="d0b04688bd212e1bc8da56d39a87a4b30639d095963eaa90585097c1419b8958" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.978215 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:25 crc kubenswrapper[4687]: I0314 09:19:25.978266 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:26 crc kubenswrapper[4687]: I0314 09:19:26.017026 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:26 crc kubenswrapper[4687]: I0314 09:19:26.020752 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:26 crc kubenswrapper[4687]: I0314 09:19:26.726506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:26 crc kubenswrapper[4687]: I0314 09:19:26.727300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:28 crc kubenswrapper[4687]: I0314 09:19:28.750466 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:28 crc kubenswrapper[4687]: I0314 09:19:28.750813 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:28 crc kubenswrapper[4687]: I0314 09:19:28.753412 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:28 crc kubenswrapper[4687]: I0314 09:19:28.753435 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.165379 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.165457 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.206660 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.207409 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.761575 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.761874 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.968319 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 14 09:19:29 crc kubenswrapper[4687]: I0314 09:19:29.979800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4687]: I0314 09:19:30.776573 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.128524 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.128853 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.220660 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.221078 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.532939 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c699fbccb-rdl22"] Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.802054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerStarted","Data":"e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2"} Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.808570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8"} Mar 14 09:19:32 crc kubenswrapper[4687]: I0314 09:19:32.821290 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f"} Mar 14 09:19:34 crc kubenswrapper[4687]: W0314 09:19:34.019972 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3088e0c_b79c_41f3_9fc6_ef0d797943e0.slice/crio-35ce8a0f45be2d737f158284b8c9c9be70cd28fa3956822149b4fe8fa5ab51b7 WatchSource:0}: Error finding container 35ce8a0f45be2d737f158284b8c9c9be70cd28fa3956822149b4fe8fa5ab51b7: Status 404 returned error can't find the container with id 35ce8a0f45be2d737f158284b8c9c9be70cd28fa3956822149b4fe8fa5ab51b7 Mar 14 09:19:34 crc kubenswrapper[4687]: E0314 09:19:34.591752 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.841468 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerStarted","Data":"89b98f8349557e9f7100104cc86a8329f85c425cd4d94806782db185dcccc56f"} Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.841696 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="ceilometer-notification-agent" containerID="cri-o://8514494b301ccfa0f2e2324278482d358e2daae67ba93feed66af9a25e62326d" gracePeriod=30 Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.842019 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.842323 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="proxy-httpd" containerID="cri-o://89b98f8349557e9f7100104cc86a8329f85c425cd4d94806782db185dcccc56f" gracePeriod=30 Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.842401 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="sg-core" containerID="cri-o://cf8626fee02672c19e2ad8d4855eb1be552795c82b5a3a4c9542e44a97e90c2b" gracePeriod=30 Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.847775 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c699fbccb-rdl22" 
event={"ID":"e3088e0c-b79c-41f3-9fc6-ef0d797943e0","Type":"ContainerStarted","Data":"60e62d5b64fc4fb1df9d69833425bd4b233fa6d4fd872270ef190f9c02470e5c"} Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.847827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c699fbccb-rdl22" event={"ID":"e3088e0c-b79c-41f3-9fc6-ef0d797943e0","Type":"ContainerStarted","Data":"f4a8d2ce55623671d94b7e8bc22b1742272272ba9db37c589148584d87addc02"} Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.847840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c699fbccb-rdl22" event={"ID":"e3088e0c-b79c-41f3-9fc6-ef0d797943e0","Type":"ContainerStarted","Data":"35ce8a0f45be2d737f158284b8c9c9be70cd28fa3956822149b4fe8fa5ab51b7"} Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.848736 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.848774 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:34 crc kubenswrapper[4687]: I0314 09:19:34.904405 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c699fbccb-rdl22" podStartSLOduration=10.904381024 podStartE2EDuration="10.904381024s" podCreationTimestamp="2026-03-14 09:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:34.893039435 +0000 UTC m=+1359.881279810" watchObservedRunningTime="2026-03-14 09:19:34.904381024 +0000 UTC m=+1359.892621409" Mar 14 09:19:35 crc kubenswrapper[4687]: I0314 09:19:35.858521 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerID="89b98f8349557e9f7100104cc86a8329f85c425cd4d94806782db185dcccc56f" exitCode=0 Mar 14 09:19:35 crc kubenswrapper[4687]: 
I0314 09:19:35.858856 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerID="cf8626fee02672c19e2ad8d4855eb1be552795c82b5a3a4c9542e44a97e90c2b" exitCode=2 Mar 14 09:19:35 crc kubenswrapper[4687]: I0314 09:19:35.858566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerDied","Data":"89b98f8349557e9f7100104cc86a8329f85c425cd4d94806782db185dcccc56f"} Mar 14 09:19:35 crc kubenswrapper[4687]: I0314 09:19:35.858971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerDied","Data":"cf8626fee02672c19e2ad8d4855eb1be552795c82b5a3a4c9542e44a97e90c2b"} Mar 14 09:19:38 crc kubenswrapper[4687]: I0314 09:19:38.747351 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:38 crc kubenswrapper[4687]: I0314 09:19:38.772147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:38 crc kubenswrapper[4687]: I0314 09:19:38.884374 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:38 crc kubenswrapper[4687]: I0314 09:19:38.908077 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.901575 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerID="8514494b301ccfa0f2e2324278482d358e2daae67ba93feed66af9a25e62326d" exitCode=0 Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.901626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerDied","Data":"8514494b301ccfa0f2e2324278482d358e2daae67ba93feed66af9a25e62326d"} Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.903517 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.903632 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.925910 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.926033 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:39 crc kubenswrapper[4687]: I0314 09:19:39.935006 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.016609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.361794 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.506013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.506823 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.506883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.506955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsq56\" (UniqueName: \"kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507056 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507109 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data\") pod \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\" (UID: \"0b74d57b-9951-4e4d-9906-18e5ae0f4010\") " Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507404 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507723 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.507762 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.512140 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56" (OuterVolumeSpecName: "kube-api-access-nsq56") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). 
InnerVolumeSpecName "kube-api-access-nsq56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.520272 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts" (OuterVolumeSpecName: "scripts") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.545464 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.560911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.587620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data" (OuterVolumeSpecName: "config-data") pod "0b74d57b-9951-4e4d-9906-18e5ae0f4010" (UID: "0b74d57b-9951-4e4d-9906-18e5ae0f4010"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609000 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609030 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609044 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609054 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b74d57b-9951-4e4d-9906-18e5ae0f4010-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609062 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b74d57b-9951-4e4d-9906-18e5ae0f4010-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.609072 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsq56\" (UniqueName: \"kubernetes.io/projected/0b74d57b-9951-4e4d-9906-18e5ae0f4010-kube-api-access-nsq56\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.913247 4687 generic.go:334] "Generic (PLEG): container finished" podID="eee82ec5-3847-4115-ac3c-5d9590930169" containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" exitCode=1 Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.913295 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerDied","Data":"e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2"} Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.913719 4687 scope.go:117] "RemoveContainer" containerID="d0b04688bd212e1bc8da56d39a87a4b30639d095963eaa90585097c1419b8958" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.915183 4687 scope.go:117] "RemoveContainer" containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:40 crc kubenswrapper[4687]: E0314 09:19:40.915565 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.919924 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b74d57b-9951-4e4d-9906-18e5ae0f4010","Type":"ContainerDied","Data":"5a99a1ad3795cc0b0fb4b570b908d9c4335172b3a08edc89f7644c8d9ef54316"} Mar 14 09:19:40 crc kubenswrapper[4687]: I0314 09:19:40.920137 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.018408 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.022504 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038094 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:41 crc kubenswrapper[4687]: E0314 09:19:41.038496 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="proxy-httpd" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038510 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="proxy-httpd" Mar 14 09:19:41 crc kubenswrapper[4687]: E0314 09:19:41.038537 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="sg-core" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038544 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="sg-core" Mar 14 09:19:41 crc kubenswrapper[4687]: E0314 09:19:41.038560 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="ceilometer-notification-agent" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038566 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="ceilometer-notification-agent" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038744 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="ceilometer-notification-agent" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038766 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="proxy-httpd" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.038779 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" containerName="sg-core" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.040320 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.042728 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.044541 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.057276 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.062984 4687 scope.go:117] "RemoveContainer" containerID="89b98f8349557e9f7100104cc86a8329f85c425cd4d94806782db185dcccc56f" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.100026 4687 scope.go:117] "RemoveContainer" containerID="cf8626fee02672c19e2ad8d4855eb1be552795c82b5a3a4c9542e44a97e90c2b" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data\") pod \"ceilometer-0\" (UID: 
\"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7rh\" (UniqueName: \"kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.128370 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: 
I0314 09:19:41.142674 4687 scope.go:117] "RemoveContainer" containerID="8514494b301ccfa0f2e2324278482d358e2daae67ba93feed66af9a25e62326d" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.219844 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:41 crc kubenswrapper[4687]: E0314 09:19:41.228728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-mk7rh log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="0085a395-184f-4bc1-a7b7-faa710aa94ab" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.229725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.229786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.229818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.229888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd\") pod \"ceilometer-0\" (UID: 
\"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.229910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7rh\" (UniqueName: \"kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.230016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.230082 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.230485 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.230711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.235130 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.250903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.251297 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.260196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7rh\" (UniqueName: \"kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.262074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts\") pod \"ceilometer-0\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.748751 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b74d57b-9951-4e4d-9906-18e5ae0f4010" path="/var/lib/kubelet/pods/0b74d57b-9951-4e4d-9906-18e5ae0f4010/volumes" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.930447 4687 scope.go:117] "RemoveContainer" 
containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:41 crc kubenswrapper[4687]: E0314 09:19:41.930707 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.930796 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:41 crc kubenswrapper[4687]: I0314 09:19:41.942271 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.045119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.045455 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.045599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk7rh\" (UniqueName: \"kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.045719 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.045883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.046247 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.046378 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd\") pod \"0085a395-184f-4bc1-a7b7-faa710aa94ab\" (UID: \"0085a395-184f-4bc1-a7b7-faa710aa94ab\") " Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.046036 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.046914 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.047520 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.050590 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.050624 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.052767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh" (OuterVolumeSpecName: "kube-api-access-mk7rh") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). 
InnerVolumeSpecName "kube-api-access-mk7rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.053839 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts" (OuterVolumeSpecName: "scripts") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.053996 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data" (OuterVolumeSpecName: "config-data") pod "0085a395-184f-4bc1-a7b7-faa710aa94ab" (UID: "0085a395-184f-4bc1-a7b7-faa710aa94ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.127711 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.127762 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.129725 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.148354 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: 
I0314 09:19:42.148387 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0085a395-184f-4bc1-a7b7-faa710aa94ab-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.148396 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.148404 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.148413 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk7rh\" (UniqueName: \"kubernetes.io/projected/0085a395-184f-4bc1-a7b7-faa710aa94ab-kube-api-access-mk7rh\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.148423 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085a395-184f-4bc1-a7b7-faa710aa94ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.220538 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.220793 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.222783 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.169:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.169:8443: 
connect: connection refused" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.938902 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.987567 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:42 crc kubenswrapper[4687]: I0314 09:19:42.995417 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.014312 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.016894 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.019281 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.019569 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.028692 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.164724 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbnl\" (UniqueName: \"kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.164972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd\") pod \"ceilometer-0\" (UID: 
\"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.165023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.165071 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.165184 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.165230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.165620 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267218 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbnl\" (UniqueName: \"kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267434 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data\") pod \"ceilometer-0\" (UID: 
\"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.267737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.268169 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.268425 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.273267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.273650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.274139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.279287 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.286308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbnl\" (UniqueName: \"kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl\") pod \"ceilometer-0\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " pod="openstack/ceilometer-0" Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.340366 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.746903 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0085a395-184f-4bc1-a7b7-faa710aa94ab" path="/var/lib/kubelet/pods/0085a395-184f-4bc1-a7b7-faa710aa94ab/volumes"
Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.822985 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 09:19:43 crc kubenswrapper[4687]: I0314 09:19:43.950801 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerStarted","Data":"d540a2e46b8744550ac0b1523c33b536571edba4308751e6534e79b0879b994b"}
Mar 14 09:19:44 crc kubenswrapper[4687]: I0314 09:19:44.967070 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerStarted","Data":"6e41a61cb27b26220956580ba74d11a6aee94a284ec21a83e6b2272ef597ad91"}
Mar 14 09:19:44 crc kubenswrapper[4687]: I0314 09:19:44.967592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerStarted","Data":"4a7d7d135c75c07ff0c56b34d62fcf667a2436aff8a500f75e348192f4bd1454"}
Mar 14 09:19:44 crc kubenswrapper[4687]: I0314 09:19:44.971998 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q2lvm" event={"ID":"0c55cdca-7409-4935-8192-c4195a654a45","Type":"ContainerDied","Data":"2a1ef1cd75684a1adea5392f9d083355e8093efd65c41c23db14ae43d773c3b6"}
Mar 14 09:19:44 crc kubenswrapper[4687]: I0314 09:19:44.971944 4687 generic.go:334] "Generic (PLEG): container finished" podID="0c55cdca-7409-4935-8192-c4195a654a45" containerID="2a1ef1cd75684a1adea5392f9d083355e8093efd65c41c23db14ae43d773c3b6" exitCode=0
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.002148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerStarted","Data":"dc9b98663d3644df84e806c8a54be29ed759fad2cadc38818500eab3e406f0e3"}
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.004670 4687 generic.go:334] "Generic (PLEG): container finished" podID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" containerID="1798aa60f8067f36800b36906cbd2bf97ff98e72ce95873d2af8ee6339e7cbea" exitCode=0
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.004739 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hsvt5" event={"ID":"1ffe58c5-8c6d-4c28-9379-3e08e365adef","Type":"ContainerDied","Data":"1798aa60f8067f36800b36906cbd2bf97ff98e72ce95873d2af8ee6339e7cbea"}
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.365796 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q2lvm"
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.427531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle\") pod \"0c55cdca-7409-4935-8192-c4195a654a45\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") "
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.427581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvb4v\" (UniqueName: \"kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v\") pod \"0c55cdca-7409-4935-8192-c4195a654a45\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") "
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.427750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config\") pod \"0c55cdca-7409-4935-8192-c4195a654a45\" (UID: \"0c55cdca-7409-4935-8192-c4195a654a45\") "
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.443581 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v" (OuterVolumeSpecName: "kube-api-access-hvb4v") pod "0c55cdca-7409-4935-8192-c4195a654a45" (UID: "0c55cdca-7409-4935-8192-c4195a654a45"). InnerVolumeSpecName "kube-api-access-hvb4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.458842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c55cdca-7409-4935-8192-c4195a654a45" (UID: "0c55cdca-7409-4935-8192-c4195a654a45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.466820 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config" (OuterVolumeSpecName: "config") pod "0c55cdca-7409-4935-8192-c4195a654a45" (UID: "0c55cdca-7409-4935-8192-c4195a654a45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.530123 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.530155 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvb4v\" (UniqueName: \"kubernetes.io/projected/0c55cdca-7409-4935-8192-c4195a654a45-kube-api-access-hvb4v\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:46 crc kubenswrapper[4687]: I0314 09:19:46.530166 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c55cdca-7409-4935-8192-c4195a654a45-config\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.026928 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q2lvm"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.027044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q2lvm" event={"ID":"0c55cdca-7409-4935-8192-c4195a654a45","Type":"ContainerDied","Data":"1a33fcfb91c963f345eb09202fb215e9aa0cef6087d3ad948721a33be30a8f9f"}
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.027324 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a33fcfb91c963f345eb09202fb215e9aa0cef6087d3ad948721a33be30a8f9f"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.298426 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"]
Mar 14 09:19:47 crc kubenswrapper[4687]: E0314 09:19:47.298828 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c55cdca-7409-4935-8192-c4195a654a45" containerName="neutron-db-sync"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.298842 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c55cdca-7409-4935-8192-c4195a654a45" containerName="neutron-db-sync"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.299051 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c55cdca-7409-4935-8192-c4195a654a45" containerName="neutron-db-sync"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.325724 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.330240 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"]
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.348371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.356725 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68w4\" (UniqueName: \"kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.356853 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.356992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.357101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.357125 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.443423 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-678d959c44-m48jt"]
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.445753 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.460395 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.460675 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.460812 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.460982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rxpxc"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.464459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.464640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68w4\" (UniqueName: \"kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.464789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.464960 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.465048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.465106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.467026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.468049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.468300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.468997 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.469384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.500147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k68w4\" (UniqueName: \"kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4\") pod \"dnsmasq-dns-76d97895f9-56lls\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.500634 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hsvt5"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.514030 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678d959c44-m48jt"]
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.565781 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data\") pod \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") "
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.566352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg8s4\" (UniqueName: \"kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4\") pod \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") "
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.566486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle\") pod \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\" (UID: \"1ffe58c5-8c6d-4c28-9379-3e08e365adef\") "
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.566755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.566835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2mf\" (UniqueName: \"kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.566932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.567062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.567249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.575537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1ffe58c5-8c6d-4c28-9379-3e08e365adef" (UID: "1ffe58c5-8c6d-4c28-9379-3e08e365adef"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.602573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4" (OuterVolumeSpecName: "kube-api-access-gg8s4") pod "1ffe58c5-8c6d-4c28-9379-3e08e365adef" (UID: "1ffe58c5-8c6d-4c28-9379-3e08e365adef"). InnerVolumeSpecName "kube-api-access-gg8s4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.668573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ffe58c5-8c6d-4c28-9379-3e08e365adef" (UID: "1ffe58c5-8c6d-4c28-9379-3e08e365adef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.669725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.669779 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2mf\" (UniqueName: \"kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.669816 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.669862 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.669947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.670010 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg8s4\" (UniqueName: \"kubernetes.io/projected/1ffe58c5-8c6d-4c28-9379-3e08e365adef-kube-api-access-gg8s4\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.670026 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.670035 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ffe58c5-8c6d-4c28-9379-3e08e365adef-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.683225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.685558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.707115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.713161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2mf\" (UniqueName: \"kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.726683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config\") pod \"neutron-678d959c44-m48jt\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.789953 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d97895f9-56lls"
Mar 14 09:19:47 crc kubenswrapper[4687]: I0314 09:19:47.826358 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678d959c44-m48jt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.043402 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerStarted","Data":"b1ff5296af28bbe8f6161445345d3bac87f285384341792a7fdda1373d579468"}
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.043567 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.046572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hsvt5" event={"ID":"1ffe58c5-8c6d-4c28-9379-3e08e365adef","Type":"ContainerDied","Data":"9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6"}
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.046607 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a671e7f24436c0046e813d6a34de634d1bb5cb8342a765af1654dda0bf497e6"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.046667 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hsvt5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.080811 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.423950831 podStartE2EDuration="6.080790432s" podCreationTimestamp="2026-03-14 09:19:42 +0000 UTC" firstStartedPulling="2026-03-14 09:19:43.827309122 +0000 UTC m=+1368.815549517" lastFinishedPulling="2026-03-14 09:19:47.484148753 +0000 UTC m=+1372.472389118" observedRunningTime="2026-03-14 09:19:48.070039966 +0000 UTC m=+1373.058280341" watchObservedRunningTime="2026-03-14 09:19:48.080790432 +0000 UTC m=+1373.069030807"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.282252 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d74b7768-djqrt"]
Mar 14 09:19:48 crc kubenswrapper[4687]: E0314 09:19:48.283060 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" containerName="barbican-db-sync"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.283086 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" containerName="barbican-db-sync"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.283316 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" containerName="barbican-db-sync"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.286037 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.294088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.294403 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cpdhf"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.294536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.299651 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cb9cbbf5c-2kkj5"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.301358 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.312591 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.336056 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d74b7768-djqrt"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.354612 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cb9cbbf5c-2kkj5"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393416 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393455 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb7n\" (UniqueName: \"kubernetes.io/projected/215e8fbe-617a-4af4-9ba2-497e59e04008-kube-api-access-hzb7n\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data-custom\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0127f00-aece-46a2-86ea-42ce2ee74619-logs\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-combined-ca-bundle\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-combined-ca-bundle\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e8fbe-617a-4af4-9ba2-497e59e04008-logs\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twdb\" (UniqueName: \"kubernetes.io/projected/b0127f00-aece-46a2-86ea-42ce2ee74619-kube-api-access-7twdb\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.393738 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data-custom\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.433182 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.485470 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.487699 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495385 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzb7n\" (UniqueName: \"kubernetes.io/projected/215e8fbe-617a-4af4-9ba2-497e59e04008-kube-api-access-hzb7n\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data-custom\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0127f00-aece-46a2-86ea-42ce2ee74619-logs\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-combined-ca-bundle\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-combined-ca-bundle\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495633 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e8fbe-617a-4af4-9ba2-497e59e04008-logs\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7twdb\" (UniqueName: \"kubernetes.io/projected/b0127f00-aece-46a2-86ea-42ce2ee74619-kube-api-access-7twdb\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.495715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data-custom\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.496501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"]
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.497236 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e8fbe-617a-4af4-9ba2-497e59e04008-logs\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.497702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0127f00-aece-46a2-86ea-42ce2ee74619-logs\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.510546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-combined-ca-bundle\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt"
Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.515196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data-custom\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID:
\"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.515767 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-combined-ca-bundle\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.520642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.521287 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e8fbe-617a-4af4-9ba2-497e59e04008-config-data-custom\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.521969 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twdb\" (UniqueName: \"kubernetes.io/projected/b0127f00-aece-46a2-86ea-42ce2ee74619-kube-api-access-7twdb\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: \"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.533648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0127f00-aece-46a2-86ea-42ce2ee74619-config-data\") pod \"barbican-keystone-listener-d74b7768-djqrt\" (UID: 
\"b0127f00-aece-46a2-86ea-42ce2ee74619\") " pod="openstack/barbican-keystone-listener-d74b7768-djqrt" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.544504 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzb7n\" (UniqueName: \"kubernetes.io/projected/215e8fbe-617a-4af4-9ba2-497e59e04008-kube-api-access-hzb7n\") pod \"barbican-worker-5cb9cbbf5c-2kkj5\" (UID: \"215e8fbe-617a-4af4-9ba2-497e59e04008\") " pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.595843 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.598221 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599444 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599463 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: 
\"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57mj\" (UniqueName: \"kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.599597 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.602934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.617916 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d74b7768-djqrt" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.619517 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.660834 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.688693 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678d959c44-m48jt"] Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705388 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t57mj\" (UniqueName: 
\"kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705749 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2v9\" (UniqueName: \"kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.705811 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.706962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.713155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.713586 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.713795 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.714324 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.747087 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.748021 4687 scope.go:117] "RemoveContainer" containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:48 crc kubenswrapper[4687]: E0314 09:19:48.748858 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.748930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.748986 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.750869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t57mj\" (UniqueName: 
\"kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj\") pod \"dnsmasq-dns-5556d8f8bc-tzfvs\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.759171 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"] Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.812222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2v9\" (UniqueName: \"kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.812278 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.812302 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.812470 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 
09:19:48.812535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.814020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.817229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.824463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.835160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.835875 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.846042 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2v9\" (UniqueName: \"kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9\") pod \"barbican-api-54647747f8-shnqv\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:48 crc kubenswrapper[4687]: I0314 09:19:48.850366 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.159643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerStarted","Data":"47b214eb186409398fce120bd9fb6f8af9b56f826d490657cc3e4ece7fa0789a"} Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.196509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d97895f9-56lls" event={"ID":"9028d03f-07db-495d-bccc-445cbb02d8f8","Type":"ContainerStarted","Data":"9dd853fe42eb0b8d6ac56d1532c57867b2d7d5d8a217377a7b90efcc9161a7cd"} Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.198389 4687 scope.go:117] "RemoveContainer" containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:49 crc kubenswrapper[4687]: E0314 09:19:49.198938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.373470 4687 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d74b7768-djqrt"] Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.539845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cb9cbbf5c-2kkj5"] Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.665367 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:19:49 crc kubenswrapper[4687]: I0314 09:19:49.711482 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"] Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.228250 4687 generic.go:334] "Generic (PLEG): container finished" podID="9028d03f-07db-495d-bccc-445cbb02d8f8" containerID="20bd2c4dea7876b687c61e84ae31068e5dce6369825abe5a3101e47dd29683b8" exitCode=0 Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.228708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d97895f9-56lls" event={"ID":"9028d03f-07db-495d-bccc-445cbb02d8f8","Type":"ContainerDied","Data":"20bd2c4dea7876b687c61e84ae31068e5dce6369825abe5a3101e47dd29683b8"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.233475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerStarted","Data":"af110cf0b8923c49db3abd0ccecab1ebd91ae41e8fddb80fe367fbd1ebb95818"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.233502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerStarted","Data":"7674b4651bbe967e1e53183f4f3edf0c6b2e2263a6598c6c66413573976b050f"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.242667 4687 generic.go:334] "Generic (PLEG): container finished" podID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" 
containerID="a8a2884f704e3cae89a3ccdf0fe112cf81f6f3479151a60e2d108f319ffa5fca" exitCode=0 Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.242793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" event={"ID":"755f36df-5ba1-4c0d-9e67-66f8fdef5e70","Type":"ContainerDied","Data":"a8a2884f704e3cae89a3ccdf0fe112cf81f6f3479151a60e2d108f319ffa5fca"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.242823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" event={"ID":"755f36df-5ba1-4c0d-9e67-66f8fdef5e70","Type":"ContainerStarted","Data":"74000bc628b2942980910529046fab8e947d86bfa9b04255bed502d018146550"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.254282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerStarted","Data":"f71fe131d63201d760eef61d7816f3435637b2f9bf20bd4a091dc55990b846ce"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.254348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerStarted","Data":"36a096caf16577d162b0501d9f77c54030ea93ef69ba68060153fecd180fe864"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.255245 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-678d959c44-m48jt" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.284509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d74b7768-djqrt" event={"ID":"b0127f00-aece-46a2-86ea-42ce2ee74619","Type":"ContainerStarted","Data":"c3627e61da316d25dff332cdb06a483c9e7a4cffa5ebebc99dd64910bdf7d292"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.294989 4687 scope.go:117] "RemoveContainer" 
containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.295535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" event={"ID":"215e8fbe-617a-4af4-9ba2-497e59e04008","Type":"ContainerStarted","Data":"8a62147b678778138454577e0a2186520c44aa46ed12ac733340bdbd02f50270"} Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.325508 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-678d959c44-m48jt" podStartSLOduration=3.325486926 podStartE2EDuration="3.325486926s" podCreationTimestamp="2026-03-14 09:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:50.280470824 +0000 UTC m=+1375.268711219" watchObservedRunningTime="2026-03-14 09:19:50.325486926 +0000 UTC m=+1375.313727311" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.522397 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6ff6c58d89-bss4w"] Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.530383 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.543990 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.544271 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.566459 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff6c58d89-bss4w"] Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.624528 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq5z\" (UniqueName: \"kubernetes.io/projected/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-kube-api-access-jsq5z\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.624865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-ovndb-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.624923 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-config\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.624947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-internal-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.625005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-combined-ca-bundle\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.625037 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-public-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.625072 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-httpd-config\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsq5z\" (UniqueName: \"kubernetes.io/projected/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-kube-api-access-jsq5z\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728226 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-ovndb-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-config\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-internal-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728365 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-combined-ca-bundle\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728398 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-public-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.728429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-httpd-config\") pod 
\"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.747120 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-ovndb-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.748068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-combined-ca-bundle\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.748687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-httpd-config\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.757175 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-config\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.758814 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-internal-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc 
kubenswrapper[4687]: I0314 09:19:50.760431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-public-tls-certs\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.761348 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsq5z\" (UniqueName: \"kubernetes.io/projected/61b16c55-e6d0-4d0c-b2df-d6940fc67dd8-kube-api-access-jsq5z\") pod \"neutron-6ff6c58d89-bss4w\" (UID: \"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8\") " pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.870429 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:50 crc kubenswrapper[4687]: I0314 09:19:50.888024 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d97895f9-56lls" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.050724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.051087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.051196 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.051296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.051372 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.051446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k68w4\" 
(UniqueName: \"kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4\") pod \"9028d03f-07db-495d-bccc-445cbb02d8f8\" (UID: \"9028d03f-07db-495d-bccc-445cbb02d8f8\") " Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.078571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4" (OuterVolumeSpecName: "kube-api-access-k68w4") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "kube-api-access-k68w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.117228 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.165232 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.165261 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k68w4\" (UniqueName: \"kubernetes.io/projected/9028d03f-07db-495d-bccc-445cbb02d8f8-kube-api-access-k68w4\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.191549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.192218 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.214947 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config" (OuterVolumeSpecName: "config") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.226905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9028d03f-07db-495d-bccc-445cbb02d8f8" (UID: "9028d03f-07db-495d-bccc-445cbb02d8f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.266734 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.266770 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.266781 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.266790 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9028d03f-07db-495d-bccc-445cbb02d8f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.349458 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d97895f9-56lls" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.352441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d97895f9-56lls" event={"ID":"9028d03f-07db-495d-bccc-445cbb02d8f8","Type":"ContainerDied","Data":"9dd853fe42eb0b8d6ac56d1532c57867b2d7d5d8a217377a7b90efcc9161a7cd"} Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.352501 4687 scope.go:117] "RemoveContainer" containerID="20bd2c4dea7876b687c61e84ae31068e5dce6369825abe5a3101e47dd29683b8" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.356354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerStarted","Data":"2ddbe2f0e9354450cc2bc3ad40ebd49b91e3605492591c35def7479f0d8b4b57"} Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.356624 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.356671 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.366282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" event={"ID":"755f36df-5ba1-4c0d-9e67-66f8fdef5e70","Type":"ContainerStarted","Data":"4df4ae0295025616d6ccace95094fe5bb92746ed228e9e1433e249c710404b91"} Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.367310 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.382793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerStarted","Data":"ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c"} Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.383676 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-54647747f8-shnqv" podStartSLOduration=3.383654028 podStartE2EDuration="3.383654028s" podCreationTimestamp="2026-03-14 09:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:51.37484487 +0000 UTC m=+1376.363085245" watchObservedRunningTime="2026-03-14 09:19:51.383654028 +0000 UTC m=+1376.371894403" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.416217 4687 generic.go:334] "Generic (PLEG): container finished" podID="21832052-3293-4320-aed2-58a020acb502" containerID="93eb5b6111a0bc45abc0164906acf568235ca1195233623645c8313217837de5" exitCode=0 Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.417124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k6kmw" event={"ID":"21832052-3293-4320-aed2-58a020acb502","Type":"ContainerDied","Data":"93eb5b6111a0bc45abc0164906acf568235ca1195233623645c8313217837de5"} Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.565944 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"] Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.590148 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76d97895f9-56lls"] Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.600708 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" podStartSLOduration=3.600682069 podStartE2EDuration="3.600682069s" podCreationTimestamp="2026-03-14 09:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:51.485603176 +0000 UTC m=+1376.473843551" watchObservedRunningTime="2026-03-14 09:19:51.600682069 +0000 UTC m=+1376.588922444" Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.722664 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff6c58d89-bss4w"] Mar 14 09:19:51 crc kubenswrapper[4687]: I0314 09:19:51.766365 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9028d03f-07db-495d-bccc-445cbb02d8f8" path="/var/lib/kubelet/pods/9028d03f-07db-495d-bccc-445cbb02d8f8/volumes" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.135771 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.136116 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.136658 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f"} pod="openstack/horizon-74f987fc4-zw2rw" containerMessage="Container horizon failed startup probe, will be restarted" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.136693 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerName="horizon" containerID="cri-o://dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f" gracePeriod=30 Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.225500 4687 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.169:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.169:8443: connect: connection refused" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.225817 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.226719 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8"} pod="openstack/horizon-7dcd9ff5b-bprxd" containerMessage="Container horizon failed startup probe, will be restarted" Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.226940 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" containerID="cri-o://90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8" gracePeriod=30 Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.439836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff6c58d89-bss4w" event={"ID":"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8","Type":"ContainerStarted","Data":"2d8593853bd1989289fdbf5dc3fdb987610fddc7a540fd0b27d91369040adcce"} Mar 14 09:19:52 crc kubenswrapper[4687]: I0314 09:19:52.441144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff6c58d89-bss4w" event={"ID":"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8","Type":"ContainerStarted","Data":"e3122d05d5f2c39cd56f565a845d0bc9fe35faf0cefb4bd25440644c0fd30dd9"} Mar 14 09:19:53 crc kubenswrapper[4687]: I0314 09:19:53.957079 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072493 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072633 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md8fz\" (UniqueName: \"kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072828 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.072879 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts\") pod \"21832052-3293-4320-aed2-58a020acb502\" (UID: \"21832052-3293-4320-aed2-58a020acb502\") " Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.083414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.086589 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz" (OuterVolumeSpecName: "kube-api-access-md8fz") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "kube-api-access-md8fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.087860 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.096505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts" (OuterVolumeSpecName: "scripts") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.113649 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.113984 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.160660 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-966dfd5fd-rdsjp"] Mar 14 09:19:54 crc kubenswrapper[4687]: E0314 09:19:54.161125 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21832052-3293-4320-aed2-58a020acb502" containerName="cinder-db-sync" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.161139 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="21832052-3293-4320-aed2-58a020acb502" containerName="cinder-db-sync" Mar 14 09:19:54 crc kubenswrapper[4687]: E0314 09:19:54.161156 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9028d03f-07db-495d-bccc-445cbb02d8f8" containerName="init" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.161163 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9028d03f-07db-495d-bccc-445cbb02d8f8" containerName="init" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.161353 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="21832052-3293-4320-aed2-58a020acb502" containerName="cinder-db-sync" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.161378 
4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9028d03f-07db-495d-bccc-445cbb02d8f8" containerName="init" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.162431 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.172037 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.172226 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.177789 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21832052-3293-4320-aed2-58a020acb502-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.177818 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.177829 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.177839 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md8fz\" (UniqueName: \"kubernetes.io/projected/21832052-3293-4320-aed2-58a020acb502-kube-api-access-md8fz\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.195718 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-966dfd5fd-rdsjp"] Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.209495 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.229268 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data" (OuterVolumeSpecName: "config-data") pod "21832052-3293-4320-aed2-58a020acb502" (UID: "21832052-3293-4320-aed2-58a020acb502"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.283023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.283127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-combined-ca-bundle\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288462 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7nx\" (UniqueName: \"kubernetes.io/projected/165379e3-6caa-4b42-b61a-ef153a72f7d2-kube-api-access-5h7nx\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " 
pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-public-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288684 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165379e3-6caa-4b42-b61a-ef153a72f7d2-logs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-internal-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data-custom\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.288902 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 
09:19:54.288925 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21832052-3293-4320-aed2-58a020acb502-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165379e3-6caa-4b42-b61a-ef153a72f7d2-logs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-internal-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390454 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data-custom\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390552 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390600 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-combined-ca-bundle\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7nx\" (UniqueName: \"kubernetes.io/projected/165379e3-6caa-4b42-b61a-ef153a72f7d2-kube-api-access-5h7nx\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.390682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-public-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.393727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165379e3-6caa-4b42-b61a-ef153a72f7d2-logs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.399837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-internal-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.403049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-public-tls-certs\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.404706 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.408875 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-config-data-custom\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.431983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7nx\" (UniqueName: \"kubernetes.io/projected/165379e3-6caa-4b42-b61a-ef153a72f7d2-kube-api-access-5h7nx\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.432224 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165379e3-6caa-4b42-b61a-ef153a72f7d2-combined-ca-bundle\") pod \"barbican-api-966dfd5fd-rdsjp\" (UID: \"165379e3-6caa-4b42-b61a-ef153a72f7d2\") " pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.529868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k6kmw" 
event={"ID":"21832052-3293-4320-aed2-58a020acb502","Type":"ContainerDied","Data":"2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b"} Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.529937 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2744ce917f7fa432dbe5b823e6dde5b7afbada1d2dc30d7da6a10c312072cc6b" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.530040 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k6kmw" Mar 14 09:19:54 crc kubenswrapper[4687]: I0314 09:19:54.533026 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:19:54 crc kubenswrapper[4687]: E0314 09:19:54.704248 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21832052_3293_4320_aed2_58a020acb502.slice\": RecentStats: unable to find data in memory cache]" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.234265 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.236966 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.240822 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.253153 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.274887 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.275748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-72582" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.295509 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffvp\" (UniqueName: \"kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360602 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360736 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.360794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.423484 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.423712 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="dnsmasq-dns" containerID="cri-o://4df4ae0295025616d6ccace95094fe5bb92746ed228e9e1433e249c710404b91" gracePeriod=10 Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.431750 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 
09:19:55.462249 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffvp\" (UniqueName: \"kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.462769 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.462863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.463063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.463132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.463179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.463270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.471091 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.493906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.500129 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.511266 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 
crc kubenswrapper[4687]: I0314 09:19:55.516843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sffvp\" (UniqueName: \"kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp\") pod \"cinder-scheduler-0\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.550746 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.552750 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.581536 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8" exitCode=1 Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.581626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8"} Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.581658 4687 scope.go:117] "RemoveContainer" containerID="c744803c90e354e2e743f2344b9a91906ccb0cb7c96bb653a8671ee32ad69010" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.600422 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.615615 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f" exitCode=1 Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.615668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f"} Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.674929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.674979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ls4\" (UniqueName: \"kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.675006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.675040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.675075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.675182 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.686421 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.787087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.791444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 
09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.791842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.792026 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.792192 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ls4\" (UniqueName: \"kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.792364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.790315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.800504 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.800554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.802477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.807869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.832557 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ls4\" (UniqueName: \"kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4\") pod \"dnsmasq-dns-94bb9f455-tr46j\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.854978 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:19:55 crc kubenswrapper[4687]: 
I0314 09:19:55.869294 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.879798 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.905042 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:19:55 crc kubenswrapper[4687]: I0314 09:19:55.974687 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjfg6\" (UniqueName: \"kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998851 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:55.998981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103263 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103366 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 
09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.103580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjfg6\" (UniqueName: \"kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.104666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs\") 
pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.105822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.110130 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.110277 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.114243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.134953 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.159935 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjfg6\" 
(UniqueName: \"kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6\") pod \"cinder-api-0\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.210712 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.650456 4687 generic.go:334] "Generic (PLEG): container finished" podID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerID="4df4ae0295025616d6ccace95094fe5bb92746ed228e9e1433e249c710404b91" exitCode=0 Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.650497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" event={"ID":"755f36df-5ba1-4c0d-9e67-66f8fdef5e70","Type":"ContainerDied","Data":"4df4ae0295025616d6ccace95094fe5bb92746ed228e9e1433e249c710404b91"} Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.766931 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.784842 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:19:56 crc kubenswrapper[4687]: I0314 09:19:56.977495 4687 scope.go:117] "RemoveContainer" containerID="955d7b1be1230c3cd065b412e0a3d78b4e2a9b69ed622511f6312f760a1ba327" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.429655 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.568736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.569026 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.569051 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.569088 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.569249 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.569283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t57mj\" 
(UniqueName: \"kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj\") pod \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\" (UID: \"755f36df-5ba1-4c0d-9e67-66f8fdef5e70\") " Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.596456 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj" (OuterVolumeSpecName: "kube-api-access-t57mj") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "kube-api-access-t57mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.666889 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.672313 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t57mj\" (UniqueName: \"kubernetes.io/projected/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-kube-api-access-t57mj\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.672350 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.679313 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" event={"ID":"755f36df-5ba1-4c0d-9e67-66f8fdef5e70","Type":"ContainerDied","Data":"74000bc628b2942980910529046fab8e947d86bfa9b04255bed502d018146550"} Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.679388 4687 scope.go:117] "RemoveContainer" containerID="4df4ae0295025616d6ccace95094fe5bb92746ed228e9e1433e249c710404b91" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.679518 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5556d8f8bc-tzfvs" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.690634 4687 generic.go:334] "Generic (PLEG): container finished" podID="eee82ec5-3847-4115-ac3c-5d9590930169" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" exitCode=1 Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.690736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerDied","Data":"ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c"} Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.692038 4687 scope.go:117] "RemoveContainer" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" Mar 14 09:19:57 crc kubenswrapper[4687]: E0314 09:19:57.692561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.703897 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.762663 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.781311 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.849687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config" (OuterVolumeSpecName: "config") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.887423 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.920491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4687]: I0314 09:19:57.995656 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.012530 4687 scope.go:117] "RemoveContainer" containerID="a8a2884f704e3cae89a3ccdf0fe112cf81f6f3479151a60e2d108f319ffa5fca" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.140001 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "755f36df-5ba1-4c0d-9e67-66f8fdef5e70" (UID: "755f36df-5ba1-4c0d-9e67-66f8fdef5e70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:58 crc kubenswrapper[4687]: W0314 09:19:58.203176 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod301e6b08_76a6_4dfb_bf19_32452e2053d1.slice/crio-e9a8c05f3cd017370f7cad03801462710a6756c2f535422f0f61d167663ce229 WatchSource:0}: Error finding container e9a8c05f3cd017370f7cad03801462710a6756c2f535422f0f61d167663ce229: Status 404 returned error can't find the container with id e9a8c05f3cd017370f7cad03801462710a6756c2f535422f0f61d167663ce229 Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.204119 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/755f36df-5ba1-4c0d-9e67-66f8fdef5e70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.402125 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c699fbccb-rdl22" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 
09:19:58.402487 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.402507 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.402518 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-966dfd5fd-rdsjp"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.402529 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.419613 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.586574 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.647433 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5556d8f8bc-tzfvs"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.684060 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.747435 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.747492 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.786756 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff6c58d89-bss4w" event={"ID":"61b16c55-e6d0-4d0c-b2df-d6940fc67dd8","Type":"ContainerStarted","Data":"7967632eb4291145f615fe9c1cf1044401de5163233ea895ecc5d385e2baa8b6"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.786816 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.799012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-966dfd5fd-rdsjp" event={"ID":"165379e3-6caa-4b42-b61a-ef153a72f7d2","Type":"ContainerStarted","Data":"5e62d78377ad862c8ac15b38287198ec05faffab9528c25667392d407e60f969"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.808410 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" event={"ID":"683995cb-d47d-415b-b471-e7c225b8f997","Type":"ContainerStarted","Data":"70cbd898a739715b8850b33d3d8dfe6670bdd15f4ebd937f9cdf48e22ee5e7d8"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.824687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.833038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerStarted","Data":"e9a8c05f3cd017370f7cad03801462710a6756c2f535422f0f61d167663ce229"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.852480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38"} Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.852676 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f7cf44c9b-7cnmp" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-log" containerID="cri-o://cab98d08164cf4c2aec6e204485abc621dfc232bf9af36cf4f62727915dce228" gracePeriod=30 Mar 14 09:19:58 
crc kubenswrapper[4687]: I0314 09:19:58.852823 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f7cf44c9b-7cnmp" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-api" containerID="cri-o://e9022b45966226831f90b1e9c94bad3b1e0b94a15e818bcc823da46d2f3ef897" gracePeriod=30 Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.854601 4687 scope.go:117] "RemoveContainer" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" Mar 14 09:19:58 crc kubenswrapper[4687]: E0314 09:19:58.854770 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.885247 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6ff6c58d89-bss4w" podStartSLOduration=8.885222902 podStartE2EDuration="8.885222902s" podCreationTimestamp="2026-03-14 09:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:58.836658922 +0000 UTC m=+1383.824899297" watchObservedRunningTime="2026-03-14 09:19:58.885222902 +0000 UTC m=+1383.873463287" Mar 14 09:19:58 crc kubenswrapper[4687]: I0314 09:19:58.982197 4687 scope.go:117] "RemoveContainer" containerID="e7e2c2583dc8c6000f10d1812faef3b8abc8a4d066a74213ee61264559a4b4b2" Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.781543 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" path="/var/lib/kubelet/pods/755f36df-5ba1-4c0d-9e67-66f8fdef5e70/volumes" Mar 14 09:19:59 crc 
kubenswrapper[4687]: I0314 09:19:59.885741 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerStarted","Data":"91575f2fee03d259fbc5d3c0cc52fd974924bc3f27903a5b1659ec49534603b7"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.906520 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-966dfd5fd-rdsjp" event={"ID":"165379e3-6caa-4b42-b61a-ef153a72f7d2","Type":"ContainerStarted","Data":"37ac8e39a11f333a958d81351d3051049f4f64e3433b8fe67413406d95f549d5"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.921501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d74b7768-djqrt" event={"ID":"b0127f00-aece-46a2-86ea-42ce2ee74619","Type":"ContainerStarted","Data":"65d549c73575034ef45b1eb0bdf9565c55ac023096453ed30a98ed82b9a4e962"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.927677 4687 generic.go:334] "Generic (PLEG): container finished" podID="683995cb-d47d-415b-b471-e7c225b8f997" containerID="4edea96f03c9881f2cafa9956880933b71b94a19d5e5c5d3b4a02455bd4a3f72" exitCode=0 Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.927858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" event={"ID":"683995cb-d47d-415b-b471-e7c225b8f997","Type":"ContainerDied","Data":"4edea96f03c9881f2cafa9956880933b71b94a19d5e5c5d3b4a02455bd4a3f72"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.940500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" event={"ID":"215e8fbe-617a-4af4-9ba2-497e59e04008","Type":"ContainerStarted","Data":"af8e11d5a2cb2d4bcca4d68fe2797caece19ab91a904cd62a0cc42e140d45c35"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.940544 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" 
event={"ID":"215e8fbe-617a-4af4-9ba2-497e59e04008","Type":"ContainerStarted","Data":"c2af93f87ecb894a0ca37bec0dff1df1708b14a4408e3d19580fbea843ed1af1"} Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.966470 4687 generic.go:334] "Generic (PLEG): container finished" podID="944ab990-3a74-471a-b889-6992cdd509b7" containerID="cab98d08164cf4c2aec6e204485abc621dfc232bf9af36cf4f62727915dce228" exitCode=143 Mar 14 09:19:59 crc kubenswrapper[4687]: I0314 09:19:59.968395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerDied","Data":"cab98d08164cf4c2aec6e204485abc621dfc232bf9af36cf4f62727915dce228"} Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.007718 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cb9cbbf5c-2kkj5" podStartSLOduration=4.565056782 podStartE2EDuration="12.007700101s" podCreationTimestamp="2026-03-14 09:19:48 +0000 UTC" firstStartedPulling="2026-03-14 09:19:49.564944337 +0000 UTC m=+1374.553184712" lastFinishedPulling="2026-03-14 09:19:57.007587656 +0000 UTC m=+1381.995828031" observedRunningTime="2026-03-14 09:19:59.982457998 +0000 UTC m=+1384.970698373" watchObservedRunningTime="2026-03-14 09:20:00.007700101 +0000 UTC m=+1384.995940476" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.165419 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558000-g74b2"] Mar 14 09:20:00 crc kubenswrapper[4687]: E0314 09:20:00.165895 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="init" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.165916 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="init" Mar 14 09:20:00 crc kubenswrapper[4687]: E0314 09:20:00.165929 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="dnsmasq-dns" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.165936 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="dnsmasq-dns" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.166120 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="755f36df-5ba1-4c0d-9e67-66f8fdef5e70" containerName="dnsmasq-dns" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.166862 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.172530 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-g74b2"] Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.172778 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.172997 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.173477 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.200666 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5sx\" (UniqueName: \"kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx\") pod \"auto-csr-approver-29558000-g74b2\" (UID: \"739e6342-59a1-411b-9d90-2fe6f07e4301\") " pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.305142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8g5sx\" (UniqueName: \"kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx\") pod \"auto-csr-approver-29558000-g74b2\" (UID: \"739e6342-59a1-411b-9d90-2fe6f07e4301\") " pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.322153 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5sx\" (UniqueName: \"kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx\") pod \"auto-csr-approver-29558000-g74b2\" (UID: \"739e6342-59a1-411b-9d90-2fe6f07e4301\") " pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:00 crc kubenswrapper[4687]: I0314 09:20:00.541490 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:00.998691 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerStarted","Data":"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.011680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerDied","Data":"e9022b45966226831f90b1e9c94bad3b1e0b94a15e818bcc823da46d2f3ef897"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.012254 4687 generic.go:334] "Generic (PLEG): container finished" podID="944ab990-3a74-471a-b889-6992cdd509b7" containerID="e9022b45966226831f90b1e9c94bad3b1e0b94a15e818bcc823da46d2f3ef897" exitCode=0 Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.018905 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerStarted","Data":"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.067475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-966dfd5fd-rdsjp" event={"ID":"165379e3-6caa-4b42-b61a-ef153a72f7d2","Type":"ContainerStarted","Data":"5393a49c1922cabc71cd03a0c3da034365f98f6cf22bb640b16c3b7cc7533d3e"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.068147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.068311 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.099543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d74b7768-djqrt" event={"ID":"b0127f00-aece-46a2-86ea-42ce2ee74619","Type":"ContainerStarted","Data":"f05a6b39cb5bf6cef8f9c4a2129ad71b6870721605d22b079c79d6d36629e0df"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.143425 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-966dfd5fd-rdsjp" podStartSLOduration=7.143401079 podStartE2EDuration="7.143401079s" podCreationTimestamp="2026-03-14 09:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:01.118973535 +0000 UTC m=+1386.107213910" watchObservedRunningTime="2026-03-14 09:20:01.143401079 +0000 UTC m=+1386.131641454" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.201426 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-g74b2"] Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.206726 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" event={"ID":"683995cb-d47d-415b-b471-e7c225b8f997","Type":"ContainerStarted","Data":"35496752d966cec18e4145df003d278058bc4e65df982592129dae2e8c62569b"} Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.215753 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d74b7768-djqrt" podStartSLOduration=5.590998899 podStartE2EDuration="13.215734416s" podCreationTimestamp="2026-03-14 09:19:48 +0000 UTC" firstStartedPulling="2026-03-14 09:19:49.373637271 +0000 UTC m=+1374.361877646" lastFinishedPulling="2026-03-14 09:19:56.998372788 +0000 UTC m=+1381.986613163" observedRunningTime="2026-03-14 09:20:01.140118488 +0000 UTC m=+1386.128358873" watchObservedRunningTime="2026-03-14 09:20:01.215734416 +0000 UTC m=+1386.203974791" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.228904 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" podStartSLOduration=6.228885191 podStartE2EDuration="6.228885191s" podCreationTimestamp="2026-03-14 09:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:01.227899256 +0000 UTC m=+1386.216139641" watchObservedRunningTime="2026-03-14 09:20:01.228885191 +0000 UTC m=+1386.217125566" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.526044 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659274 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659404 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659484 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z64jc\" (UniqueName: \"kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659650 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.659693 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs\") pod \"944ab990-3a74-471a-b889-6992cdd509b7\" (UID: \"944ab990-3a74-471a-b889-6992cdd509b7\") " Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.663712 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs" (OuterVolumeSpecName: "logs") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.678809 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts" (OuterVolumeSpecName: "scripts") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.683454 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc" (OuterVolumeSpecName: "kube-api-access-z64jc") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "kube-api-access-z64jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.763437 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z64jc\" (UniqueName: \"kubernetes.io/projected/944ab990-3a74-471a-b889-6992cdd509b7-kube-api-access-z64jc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.763467 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944ab990-3a74-471a-b889-6992cdd509b7-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.763478 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.766471 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.781643 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data" (OuterVolumeSpecName: "config-data") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.844944 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.854435 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "944ab990-3a74-471a-b889-6992cdd509b7" (UID: "944ab990-3a74-471a-b889-6992cdd509b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.868762 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.869011 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.869103 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4687]: I0314 09:20:01.869180 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ab990-3a74-471a-b889-6992cdd509b7-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.130484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.131727 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.218531 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-g74b2" event={"ID":"739e6342-59a1-411b-9d90-2fe6f07e4301","Type":"ContainerStarted","Data":"cd76e41c4f254cbb8b30239ff30cc4158011fe3516eeed78e45cb0d5eab53680"} Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.219857 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.219911 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.220712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7cf44c9b-7cnmp" event={"ID":"944ab990-3a74-471a-b889-6992cdd509b7","Type":"ContainerDied","Data":"dd798f6a8c148d73c57d4cc1509bcdb78986505d887cc22a5c3d316043a2be9f"} Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.220735 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f7cf44c9b-7cnmp" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.220777 4687 scope.go:117] "RemoveContainer" containerID="e9022b45966226831f90b1e9c94bad3b1e0b94a15e818bcc823da46d2f3ef897" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.223038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerStarted","Data":"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f"} Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.223812 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.223952 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api-log" containerID="cri-o://d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae" gracePeriod=30 Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.223951 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api" containerID="cri-o://a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f" gracePeriod=30 Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.248642 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.248616013 podStartE2EDuration="7.248616013s" podCreationTimestamp="2026-03-14 09:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:02.246454199 +0000 UTC m=+1387.234694584" watchObservedRunningTime="2026-03-14 09:20:02.248616013 +0000 UTC m=+1387.236856388" Mar 14 09:20:02 crc 
kubenswrapper[4687]: I0314 09:20:02.271340 4687 scope.go:117] "RemoveContainer" containerID="cab98d08164cf4c2aec6e204485abc621dfc232bf9af36cf4f62727915dce228" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.313891 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.323965 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5f7cf44c9b-7cnmp"] Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.930574 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-54647747f8-shnqv" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.187:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.930639 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-54647747f8-shnqv" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.187:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:20:02 crc kubenswrapper[4687]: I0314 09:20:02.930723 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.112450 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.235751 4687 generic.go:334] "Generic (PLEG): container finished" podID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerID="d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae" exitCode=143 Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.235826 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-api-0" event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerDied","Data":"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae"} Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.241478 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerStarted","Data":"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67"} Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.268973 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.29501377 podStartE2EDuration="8.26894952s" podCreationTimestamp="2026-03-14 09:19:55 +0000 UTC" firstStartedPulling="2026-03-14 09:19:58.213063507 +0000 UTC m=+1383.201303882" lastFinishedPulling="2026-03-14 09:19:59.186999257 +0000 UTC m=+1384.175239632" observedRunningTime="2026-03-14 09:20:03.262577872 +0000 UTC m=+1388.250818267" watchObservedRunningTime="2026-03-14 09:20:03.26894952 +0000 UTC m=+1388.257189915" Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.749424 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944ab990-3a74-471a-b889-6992cdd509b7" path="/var/lib/kubelet/pods/944ab990-3a74-471a-b889-6992cdd509b7/volumes" Mar 14 09:20:03 crc kubenswrapper[4687]: I0314 09:20:03.990663 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7978d574c6-8llvn" Mar 14 09:20:04 crc kubenswrapper[4687]: I0314 09:20:04.255653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-g74b2" event={"ID":"739e6342-59a1-411b-9d90-2fe6f07e4301","Type":"ContainerStarted","Data":"b60d8f5cfe0d663b0626d8fe209e949615a2fc12bb8b8d6db79d514266675a68"} Mar 14 09:20:04 crc kubenswrapper[4687]: I0314 09:20:04.281937 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-infra/auto-csr-approver-29558000-g74b2" podStartSLOduration=1.8545539070000001 podStartE2EDuration="4.281913294s" podCreationTimestamp="2026-03-14 09:20:00 +0000 UTC" firstStartedPulling="2026-03-14 09:20:01.340407346 +0000 UTC m=+1386.328647721" lastFinishedPulling="2026-03-14 09:20:03.767766733 +0000 UTC m=+1388.756007108" observedRunningTime="2026-03-14 09:20:04.27162105 +0000 UTC m=+1389.259861435" watchObservedRunningTime="2026-03-14 09:20:04.281913294 +0000 UTC m=+1389.270153669" Mar 14 09:20:05 crc kubenswrapper[4687]: I0314 09:20:05.265461 4687 generic.go:334] "Generic (PLEG): container finished" podID="739e6342-59a1-411b-9d90-2fe6f07e4301" containerID="b60d8f5cfe0d663b0626d8fe209e949615a2fc12bb8b8d6db79d514266675a68" exitCode=0 Mar 14 09:20:05 crc kubenswrapper[4687]: I0314 09:20:05.265614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-g74b2" event={"ID":"739e6342-59a1-411b-9d90-2fe6f07e4301","Type":"ContainerDied","Data":"b60d8f5cfe0d663b0626d8fe209e949615a2fc12bb8b8d6db79d514266675a68"} Mar 14 09:20:05 crc kubenswrapper[4687]: I0314 09:20:05.601953 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 09:20:05 crc kubenswrapper[4687]: I0314 09:20:05.604475 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.190:8080/\": dial tcp 10.217.0.190:8080: connect: connection refused" Mar 14 09:20:05 crc kubenswrapper[4687]: I0314 09:20:05.976485 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:20:06 crc kubenswrapper[4687]: I0314 09:20:06.044369 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"] Mar 14 09:20:06 crc 
kubenswrapper[4687]: I0314 09:20:06.044848 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="dnsmasq-dns" containerID="cri-o://fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd" gracePeriod=10 Mar 14 09:20:06 crc kubenswrapper[4687]: I0314 09:20:06.213740 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 14 09:20:06 crc kubenswrapper[4687]: I0314 09:20:06.724662 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:20:06 crc kubenswrapper[4687]: I0314 09:20:06.980546 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:20:06 crc kubenswrapper[4687]: I0314 09:20:06.987697 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5sx\" (UniqueName: \"kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx\") pod \"739e6342-59a1-411b-9d90-2fe6f07e4301\" (UID: \"739e6342-59a1-411b-9d90-2fe6f07e4301\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102443 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.102515 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ppps\" (UniqueName: \"kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps\") pod \"2b207391-08eb-4ce1-aebf-a49c10b21fed\" (UID: \"2b207391-08eb-4ce1-aebf-a49c10b21fed\") " Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.121280 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps" (OuterVolumeSpecName: "kube-api-access-2ppps") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). 
InnerVolumeSpecName "kube-api-access-2ppps". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.135298 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx" (OuterVolumeSpecName: "kube-api-access-8g5sx") pod "739e6342-59a1-411b-9d90-2fe6f07e4301" (UID: "739e6342-59a1-411b-9d90-2fe6f07e4301"). InnerVolumeSpecName "kube-api-access-8g5sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.183555 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config" (OuterVolumeSpecName: "config") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.197140 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.210403 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.210439 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5sx\" (UniqueName: \"kubernetes.io/projected/739e6342-59a1-411b-9d90-2fe6f07e4301-kube-api-access-8g5sx\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.210448 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.210457 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ppps\" (UniqueName: \"kubernetes.io/projected/2b207391-08eb-4ce1-aebf-a49c10b21fed-kube-api-access-2ppps\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.217966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.219210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.227006 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b207391-08eb-4ce1-aebf-a49c10b21fed" (UID: "2b207391-08eb-4ce1-aebf-a49c10b21fed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.311690 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.311728 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.311737 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b207391-08eb-4ce1-aebf-a49c10b21fed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.327904 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerID="fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd" exitCode=0 Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.327964 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" event={"ID":"2b207391-08eb-4ce1-aebf-a49c10b21fed","Type":"ContainerDied","Data":"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd"} Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.327991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-95d6d9649-krjq9" event={"ID":"2b207391-08eb-4ce1-aebf-a49c10b21fed","Type":"ContainerDied","Data":"fec0e867bb416ad8c665a5d31ab67ba4aad466700f324c838ebfcf4648a8be90"} Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.328006 4687 scope.go:117] "RemoveContainer" containerID="fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.328125 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95d6d9649-krjq9" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.348672 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-g74b2" event={"ID":"739e6342-59a1-411b-9d90-2fe6f07e4301","Type":"ContainerDied","Data":"cd76e41c4f254cbb8b30239ff30cc4158011fe3516eeed78e45cb0d5eab53680"} Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.348712 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd76e41c4f254cbb8b30239ff30cc4158011fe3516eeed78e45cb0d5eab53680" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.348762 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-g74b2" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.378741 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-dnlms"] Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.381829 4687 scope.go:117] "RemoveContainer" containerID="458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.393448 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-dnlms"] Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.405305 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"] Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.417927 4687 scope.go:117] "RemoveContainer" containerID="fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.421463 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95d6d9649-krjq9"] Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.423813 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd\": container with ID starting with fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd not found: ID does not exist" containerID="fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.423986 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd"} err="failed to get container status \"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd\": rpc error: code = NotFound desc = could not find container 
\"fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd\": container with ID starting with fc52c89b6d2130fc96f332138af9a5a6a2b036a5eee4e9d239a3c56f6234e4cd not found: ID does not exist" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.424096 4687 scope.go:117] "RemoveContainer" containerID="458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008" Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.425008 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008\": container with ID starting with 458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008 not found: ID does not exist" containerID="458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.425132 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008"} err="failed to get container status \"458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008\": rpc error: code = NotFound desc = could not find container \"458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008\": container with ID starting with 458812bf8aef4b850d52e89d7f19c6a3063591cc73aa00ecc44ea3e42fbbe008 not found: ID does not exist" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.570575 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.571289 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="init" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571311 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="init" Mar 14 09:20:07 crc kubenswrapper[4687]: 
E0314 09:20:07.571326 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-log" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571414 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-log" Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.571445 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739e6342-59a1-411b-9d90-2fe6f07e4301" containerName="oc" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571454 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="739e6342-59a1-411b-9d90-2fe6f07e4301" containerName="oc" Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.571467 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="dnsmasq-dns" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571476 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="dnsmasq-dns" Mar 14 09:20:07 crc kubenswrapper[4687]: E0314 09:20:07.571493 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-api" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571500 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-api" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571722 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-api" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571738 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" containerName="dnsmasq-dns" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571753 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="944ab990-3a74-471a-b889-6992cdd509b7" containerName="placement-log" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.571769 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="739e6342-59a1-411b-9d90-2fe6f07e4301" containerName="oc" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.573283 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.575511 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.576399 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.576554 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vnb5t" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.589631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.723091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config-secret\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.723187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.723265 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cphnm\" (UniqueName: \"kubernetes.io/projected/a285a7f0-8991-4a29-a2b0-2c31bcba7433-kube-api-access-cphnm\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.723313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.747897 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b207391-08eb-4ce1-aebf-a49c10b21fed" path="/var/lib/kubelet/pods/2b207391-08eb-4ce1-aebf-a49c10b21fed/volumes" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.748517 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48f6277-8485-4374-858f-43ddb712771a" path="/var/lib/kubelet/pods/c48f6277-8485-4374-858f-43ddb712771a/volumes" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.824746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.824849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cphnm\" (UniqueName: \"kubernetes.io/projected/a285a7f0-8991-4a29-a2b0-2c31bcba7433-kube-api-access-cphnm\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: 
I0314 09:20:07.824894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.825778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config-secret\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.826029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.832874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-openstack-config-secret\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.834940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a285a7f0-8991-4a29-a2b0-2c31bcba7433-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.847004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cphnm\" (UniqueName: 
\"kubernetes.io/projected/a285a7f0-8991-4a29-a2b0-2c31bcba7433-kube-api-access-cphnm\") pod \"openstackclient\" (UID: \"a285a7f0-8991-4a29-a2b0-2c31bcba7433\") " pod="openstack/openstackclient" Mar 14 09:20:07 crc kubenswrapper[4687]: I0314 09:20:07.888209 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 14 09:20:08 crc kubenswrapper[4687]: I0314 09:20:08.774597 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:08 crc kubenswrapper[4687]: W0314 09:20:08.784904 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda285a7f0_8991_4a29_a2b0_2c31bcba7433.slice/crio-140682aff22cc9f8e52dde4b1f6d8cbfee8e3ca83dbe816b74b18d5ad81115cd WatchSource:0}: Error finding container 140682aff22cc9f8e52dde4b1f6d8cbfee8e3ca83dbe816b74b18d5ad81115cd: Status 404 returned error can't find the container with id 140682aff22cc9f8e52dde4b1f6d8cbfee8e3ca83dbe816b74b18d5ad81115cd Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.078457 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-966dfd5fd-rdsjp" Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.146945 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.147225 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-54647747f8-shnqv" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api-log" containerID="cri-o://af110cf0b8923c49db3abd0ccecab1ebd91ae41e8fddb80fe367fbd1ebb95818" gracePeriod=30 Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.147383 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-54647747f8-shnqv" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" 
containerName="barbican-api" containerID="cri-o://2ddbe2f0e9354450cc2bc3ad40ebd49b91e3605492591c35def7479f0d8b4b57" gracePeriod=30 Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.438389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a285a7f0-8991-4a29-a2b0-2c31bcba7433","Type":"ContainerStarted","Data":"140682aff22cc9f8e52dde4b1f6d8cbfee8e3ca83dbe816b74b18d5ad81115cd"} Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.440344 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerID="af110cf0b8923c49db3abd0ccecab1ebd91ae41e8fddb80fe367fbd1ebb95818" exitCode=143 Mar 14 09:20:09 crc kubenswrapper[4687]: I0314 09:20:09.440382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerDied","Data":"af110cf0b8923c49db3abd0ccecab1ebd91ae41e8fddb80fe367fbd1ebb95818"} Mar 14 09:20:10 crc kubenswrapper[4687]: I0314 09:20:10.896084 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 09:20:10 crc kubenswrapper[4687]: I0314 09:20:10.970487 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.095625 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.495742 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerID="2ddbe2f0e9354450cc2bc3ad40ebd49b91e3605492591c35def7479f0d8b4b57" exitCode=0 Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.495947 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" 
containerName="cinder-scheduler" containerID="cri-o://1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" gracePeriod=30 Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.496270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerDied","Data":"2ddbe2f0e9354450cc2bc3ad40ebd49b91e3605492591c35def7479f0d8b4b57"} Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.496301 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54647747f8-shnqv" event={"ID":"a8b580fa-9c8c-410c-bd26-5699daa1a15f","Type":"ContainerDied","Data":"7674b4651bbe967e1e53183f4f3edf0c6b2e2263a6598c6c66413573976b050f"} Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.496311 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7674b4651bbe967e1e53183f4f3edf0c6b2e2263a6598c6c66413573976b050f" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.496656 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="probe" containerID="cri-o://e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" gracePeriod=30 Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.510423 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.560620 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom\") pod \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.560702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs\") pod \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.560723 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle\") pod \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.560772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data\") pod \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.560796 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2v9\" (UniqueName: \"kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9\") pod \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\" (UID: \"a8b580fa-9c8c-410c-bd26-5699daa1a15f\") " Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.564747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs" (OuterVolumeSpecName: "logs") pod "a8b580fa-9c8c-410c-bd26-5699daa1a15f" (UID: "a8b580fa-9c8c-410c-bd26-5699daa1a15f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.587834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9" (OuterVolumeSpecName: "kube-api-access-7d2v9") pod "a8b580fa-9c8c-410c-bd26-5699daa1a15f" (UID: "a8b580fa-9c8c-410c-bd26-5699daa1a15f"). InnerVolumeSpecName "kube-api-access-7d2v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.599463 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8b580fa-9c8c-410c-bd26-5699daa1a15f" (UID: "a8b580fa-9c8c-410c-bd26-5699daa1a15f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.667614 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.667645 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b580fa-9c8c-410c-bd26-5699daa1a15f-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.667654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2v9\" (UniqueName: \"kubernetes.io/projected/a8b580fa-9c8c-410c-bd26-5699daa1a15f-kube-api-access-7d2v9\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.706469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data" (OuterVolumeSpecName: "config-data") pod "a8b580fa-9c8c-410c-bd26-5699daa1a15f" (UID: "a8b580fa-9c8c-410c-bd26-5699daa1a15f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.709290 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8b580fa-9c8c-410c-bd26-5699daa1a15f" (UID: "a8b580fa-9c8c-410c-bd26-5699daa1a15f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.770699 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:11 crc kubenswrapper[4687]: I0314 09:20:11.770740 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b580fa-9c8c-410c-bd26-5699daa1a15f-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:12 crc kubenswrapper[4687]: I0314 09:20:12.131325 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Mar 14 09:20:12 crc kubenswrapper[4687]: I0314 09:20:12.221363 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.169:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.169:8443: connect: connection refused" Mar 14 09:20:12 crc kubenswrapper[4687]: I0314 09:20:12.506305 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-54647747f8-shnqv" Mar 14 09:20:12 crc kubenswrapper[4687]: I0314 09:20:12.583201 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:20:12 crc kubenswrapper[4687]: I0314 09:20:12.594950 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-54647747f8-shnqv"] Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.144834 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.305409 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.305469 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.306252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.306355 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sffvp\" (UniqueName: \"kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 
09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.306428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.306471 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id\") pod \"301e6b08-76a6-4dfb-bf19-32452e2053d1\" (UID: \"301e6b08-76a6-4dfb-bf19-32452e2053d1\") " Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.306899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.311620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp" (OuterVolumeSpecName: "kube-api-access-sffvp") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). InnerVolumeSpecName "kube-api-access-sffvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.311790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.326926 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts" (OuterVolumeSpecName: "scripts") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.369909 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.422666 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.422983 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sffvp\" (UniqueName: \"kubernetes.io/projected/301e6b08-76a6-4dfb-bf19-32452e2053d1-kube-api-access-sffvp\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.423179 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.423197 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301e6b08-76a6-4dfb-bf19-32452e2053d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.480234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.507471 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data" (OuterVolumeSpecName: "config-data") pod "301e6b08-76a6-4dfb-bf19-32452e2053d1" (UID: "301e6b08-76a6-4dfb-bf19-32452e2053d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.524847 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.524886 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301e6b08-76a6-4dfb-bf19-32452e2053d1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541261 4687 generic.go:334] "Generic (PLEG): container finished" podID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerID="e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" exitCode=0 Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541302 4687 generic.go:334] "Generic (PLEG): container finished" podID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerID="1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" exitCode=0 Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541370 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerDied","Data":"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67"} Mar 14 09:20:13 crc 
kubenswrapper[4687]: I0314 09:20:13.541431 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerDied","Data":"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f"} Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301e6b08-76a6-4dfb-bf19-32452e2053d1","Type":"ContainerDied","Data":"e9a8c05f3cd017370f7cad03801462710a6756c2f535422f0f61d167663ce229"} Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541468 4687 scope.go:117] "RemoveContainer" containerID="e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.541707 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.577392 4687 scope.go:117] "RemoveContainer" containerID="1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.602245 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.632778 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.645711 4687 scope.go:117] "RemoveContainer" containerID="e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.646238 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67\": container with ID starting with e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67 not found: ID does not exist" 
containerID="e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.646405 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67"} err="failed to get container status \"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67\": rpc error: code = NotFound desc = could not find container \"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67\": container with ID starting with e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67 not found: ID does not exist" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.646563 4687 scope.go:117] "RemoveContainer" containerID="1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.647742 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f\": container with ID starting with 1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f not found: ID does not exist" containerID="1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.647803 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f"} err="failed to get container status \"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f\": rpc error: code = NotFound desc = could not find container \"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f\": container with ID starting with 1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f not found: ID does not exist" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.647847 4687 scope.go:117] 
"RemoveContainer" containerID="e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.648233 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67"} err="failed to get container status \"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67\": rpc error: code = NotFound desc = could not find container \"e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67\": container with ID starting with e77e41c7037c729b3f868b1fc794ed23437a8922c51e9968c2b4baab297dbc67 not found: ID does not exist" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.648303 4687 scope.go:117] "RemoveContainer" containerID="1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.649010 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f"} err="failed to get container status \"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f\": rpc error: code = NotFound desc = could not find container \"1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f\": container with ID starting with 1e9d1b405a0749d60446d7312b5a2c8bb094f3f1da6363b5d4cd106911a6816f not found: ID does not exist" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.652319 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.653119 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.653226 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" 
containerName="barbican-api" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.653314 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="probe" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.653389 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="probe" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.653451 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api-log" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.653533 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api-log" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.653600 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="cinder-scheduler" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.653657 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="cinder-scheduler" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.653918 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" containerName="barbican-api-log" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.654028 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="cinder-scheduler" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.654119 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" containerName="probe" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.654201 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" 
containerName="barbican-api" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.662656 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.665205 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.668629 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.736568 4687 scope.go:117] "RemoveContainer" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" Mar 14 09:20:13 crc kubenswrapper[4687]: E0314 09:20:13.736833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(eee82ec5-3847-4115-ac3c-5d9590930169)\"" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.750712 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e6b08-76a6-4dfb-bf19-32452e2053d1" path="/var/lib/kubelet/pods/301e6b08-76a6-4dfb-bf19-32452e2053d1/volumes" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.751576 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b580fa-9c8c-410c-bd26-5699daa1a15f" path="/var/lib/kubelet/pods/a8b580fa-9c8c-410c-bd26-5699daa1a15f/volumes" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.833831 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.833905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlj4x\" (UniqueName: \"kubernetes.io/projected/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-kube-api-access-tlj4x\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.834007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.834042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.834104 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.834189 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " 
pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlj4x\" (UniqueName: \"kubernetes.io/projected/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-kube-api-access-tlj4x\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.935582 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.936155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.939790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.939877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.939898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-config-data\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.941097 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.953980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlj4x\" (UniqueName: \"kubernetes.io/projected/4abc7dfc-bc68-41b0-ba51-175e0febcc3b-kube-api-access-tlj4x\") pod \"cinder-scheduler-0\" (UID: \"4abc7dfc-bc68-41b0-ba51-175e0febcc3b\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:13 crc kubenswrapper[4687]: I0314 09:20:13.979388 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:14 crc kubenswrapper[4687]: I0314 09:20:14.505051 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:14 crc kubenswrapper[4687]: W0314 09:20:14.528780 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4abc7dfc_bc68_41b0_ba51_175e0febcc3b.slice/crio-04fc59e4d053b11db02142d672d57e37930172e19cba18029f936a3633faaf50 WatchSource:0}: Error finding container 04fc59e4d053b11db02142d672d57e37930172e19cba18029f936a3633faaf50: Status 404 returned error can't find the container with id 04fc59e4d053b11db02142d672d57e37930172e19cba18029f936a3633faaf50 Mar 14 09:20:14 crc kubenswrapper[4687]: I0314 09:20:14.608501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4abc7dfc-bc68-41b0-ba51-175e0febcc3b","Type":"ContainerStarted","Data":"04fc59e4d053b11db02142d672d57e37930172e19cba18029f936a3633faaf50"} Mar 14 09:20:15 crc kubenswrapper[4687]: I0314 09:20:15.622852 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4abc7dfc-bc68-41b0-ba51-175e0febcc3b","Type":"ContainerStarted","Data":"8c9d1ab29343cfa9097f743cf78f587f9608ee098f84d97610cd025d55babd54"} Mar 14 09:20:16 crc kubenswrapper[4687]: I0314 09:20:16.635934 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4abc7dfc-bc68-41b0-ba51-175e0febcc3b","Type":"ContainerStarted","Data":"da8ff87562523f8a73c3ae6e84b7aa6f7154c535df70beb63e0c7d8e834f21e5"} Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.007737 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.007719348 podStartE2EDuration="4.007719348s" podCreationTimestamp="2026-03-14 09:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:16.657254921 +0000 UTC m=+1401.645495296" watchObservedRunningTime="2026-03-14 09:20:17.007719348 +0000 UTC m=+1401.995959723" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.008998 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d6889c85c-hl9hs"] Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.010630 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.016982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.017166 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.017295 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.056578 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d6889c85c-hl9hs"] Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.096812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2qk\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-kube-api-access-kc2qk\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.096881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-etc-swift\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.096915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-log-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 
09:20:17.096939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-combined-ca-bundle\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.097005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-internal-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.097029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-config-data\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.097074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-public-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.097175 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-run-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc 
kubenswrapper[4687]: I0314 09:20:17.199027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-run-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2qk\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-kube-api-access-kc2qk\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-etc-swift\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-log-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-combined-ca-bundle\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199305 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-internal-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-config-data\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.199444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-public-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.259217 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-run-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.259407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/167eccc2-08cb-4683-a74b-360da7bfb902-log-httpd\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.259968 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-internal-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.260694 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-etc-swift\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.262316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-config-data\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.321610 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-public-tls-certs\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.323441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167eccc2-08cb-4683-a74b-360da7bfb902-combined-ca-bundle\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.843225 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-678d959c44-m48jt" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.940675 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kc2qk\" (UniqueName: \"kubernetes.io/projected/167eccc2-08cb-4683-a74b-360da7bfb902-kube-api-access-kc2qk\") pod \"swift-proxy-d6889c85c-hl9hs\" (UID: \"167eccc2-08cb-4683-a74b-360da7bfb902\") " pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:17 crc kubenswrapper[4687]: I0314 09:20:17.972158 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.003642 4687 scope.go:117] "RemoveContainer" containerID="46bd9be0aa50b97cf9cdc8260301084d866ddce692d5fdcc6a89b1cc7904998c" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.669640 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" exitCode=1 Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.669908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38"} Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.670013 4687 scope.go:117] "RemoveContainer" containerID="dbeadd0ce81251209ab9c0ba0c638b3b88d2ef95487772b9c9df234b5b5a8b3f" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.671029 4687 scope.go:117] "RemoveContainer" containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" Mar 14 09:20:18 crc kubenswrapper[4687]: E0314 09:20:18.671549 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 
09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.747235 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.747279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.748104 4687 scope.go:117] "RemoveContainer" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" Mar 14 09:20:18 crc kubenswrapper[4687]: I0314 09:20:18.979967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 09:20:19 crc kubenswrapper[4687]: I0314 09:20:19.682665 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" exitCode=1 Mar 14 09:20:19 crc kubenswrapper[4687]: I0314 09:20:19.682707 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b"} Mar 14 09:20:19 crc kubenswrapper[4687]: I0314 09:20:19.683319 4687 scope.go:117] "RemoveContainer" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" Mar 14 09:20:19 crc kubenswrapper[4687]: E0314 09:20:19.683614 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.013887 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.014219 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-central-agent" containerID="cri-o://4a7d7d135c75c07ff0c56b34d62fcf667a2436aff8a500f75e348192f4bd1454" gracePeriod=30 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.014274 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="proxy-httpd" containerID="cri-o://b1ff5296af28bbe8f6161445345d3bac87f285384341792a7fdda1373d579468" gracePeriod=30 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.014298 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-notification-agent" containerID="cri-o://6e41a61cb27b26220956580ba74d11a6aee94a284ec21a83e6b2272ef597ad91" gracePeriod=30 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.014371 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="sg-core" containerID="cri-o://dc9b98663d3644df84e806c8a54be29ed759fad2cadc38818500eab3e406f0e3" gracePeriod=30 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.700029 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerID="b1ff5296af28bbe8f6161445345d3bac87f285384341792a7fdda1373d579468" exitCode=0 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.700313 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerID="dc9b98663d3644df84e806c8a54be29ed759fad2cadc38818500eab3e406f0e3" exitCode=2 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.700130 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerDied","Data":"b1ff5296af28bbe8f6161445345d3bac87f285384341792a7fdda1373d579468"} Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.700361 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerDied","Data":"dc9b98663d3644df84e806c8a54be29ed759fad2cadc38818500eab3e406f0e3"} Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.882965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6ff6c58d89-bss4w" Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.957923 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-678d959c44-m48jt"] Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.958176 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-678d959c44-m48jt" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-api" containerID="cri-o://36a096caf16577d162b0501d9f77c54030ea93ef69ba68060153fecd180fe864" gracePeriod=30 Mar 14 09:20:20 crc kubenswrapper[4687]: I0314 09:20:20.958241 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-678d959c44-m48jt" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-httpd" containerID="cri-o://f71fe131d63201d760eef61d7816f3435637b2f9bf20bd4a091dc55990b846ce" gracePeriod=30 Mar 14 09:20:21 crc kubenswrapper[4687]: I0314 09:20:21.768091 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerID="4a7d7d135c75c07ff0c56b34d62fcf667a2436aff8a500f75e348192f4bd1454" exitCode=0 Mar 14 09:20:21 crc kubenswrapper[4687]: I0314 09:20:21.768134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerDied","Data":"4a7d7d135c75c07ff0c56b34d62fcf667a2436aff8a500f75e348192f4bd1454"} Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.128127 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.128424 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.129245 4687 scope.go:117] "RemoveContainer" containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" Mar 14 09:20:22 crc kubenswrapper[4687]: E0314 09:20:22.129567 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.220038 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.220075 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.220722 4687 scope.go:117] "RemoveContainer" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" Mar 14 09:20:22 crc kubenswrapper[4687]: E0314 09:20:22.220910 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.779062 4687 generic.go:334] "Generic (PLEG): container finished" podID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerID="f71fe131d63201d760eef61d7816f3435637b2f9bf20bd4a091dc55990b846ce" exitCode=0 Mar 14 09:20:22 crc kubenswrapper[4687]: I0314 09:20:22.779117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerDied","Data":"f71fe131d63201d760eef61d7816f3435637b2f9bf20bd4a091dc55990b846ce"} Mar 14 09:20:23 crc kubenswrapper[4687]: I0314 09:20:23.794883 4687 generic.go:334] "Generic (PLEG): container finished" podID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerID="36a096caf16577d162b0501d9f77c54030ea93ef69ba68060153fecd180fe864" exitCode=0 Mar 14 09:20:23 crc kubenswrapper[4687]: I0314 09:20:23.795041 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerDied","Data":"36a096caf16577d162b0501d9f77c54030ea93ef69ba68060153fecd180fe864"} Mar 14 09:20:23 crc kubenswrapper[4687]: I0314 09:20:23.798791 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerID="6e41a61cb27b26220956580ba74d11a6aee94a284ec21a83e6b2272ef597ad91" exitCode=0 Mar 14 09:20:23 crc kubenswrapper[4687]: I0314 09:20:23.798831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerDied","Data":"6e41a61cb27b26220956580ba74d11a6aee94a284ec21a83e6b2272ef597ad91"} Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.111621 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.111699 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.111751 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.112719 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.112800 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4" gracePeriod=600 Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.130993 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.815786 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4" exitCode=0 Mar 14 09:20:24 crc kubenswrapper[4687]: 
I0314 09:20:24.815868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4"} Mar 14 09:20:24 crc kubenswrapper[4687]: I0314 09:20:24.860032 4687 scope.go:117] "RemoveContainer" containerID="90322d8d48bdae5a609ca186375c80a744c7ac169706cbbb8f4fb51792e497f8" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.017620 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073211 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073297 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073353 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbnl\" (UniqueName: \"kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl\") pod 
\"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073465 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.073513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml\") pod \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\" (UID: \"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.074544 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.074634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.075234 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.075257 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.082354 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678d959c44-m48jt" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.084127 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts" (OuterVolumeSpecName: "scripts") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.084290 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl" (OuterVolumeSpecName: "kube-api-access-nxbnl") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "kube-api-access-nxbnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.149519 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.176238 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle\") pod \"b86756d4-4a7b-47d8-9ed2-00e8684001db\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.176302 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2mf\" (UniqueName: \"kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf\") pod \"b86756d4-4a7b-47d8-9ed2-00e8684001db\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.176327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config\") pod \"b86756d4-4a7b-47d8-9ed2-00e8684001db\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.176457 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config\") pod \"b86756d4-4a7b-47d8-9ed2-00e8684001db\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.176488 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs\") pod \"b86756d4-4a7b-47d8-9ed2-00e8684001db\" (UID: \"b86756d4-4a7b-47d8-9ed2-00e8684001db\") " Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.178220 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nxbnl\" (UniqueName: \"kubernetes.io/projected/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-kube-api-access-nxbnl\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.178369 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.178379 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.182571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf" (OuterVolumeSpecName: "kube-api-access-sj2mf") pod "b86756d4-4a7b-47d8-9ed2-00e8684001db" (UID: "b86756d4-4a7b-47d8-9ed2-00e8684001db"). InnerVolumeSpecName "kube-api-access-sj2mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.188608 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b86756d4-4a7b-47d8-9ed2-00e8684001db" (UID: "b86756d4-4a7b-47d8-9ed2-00e8684001db"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.238647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b86756d4-4a7b-47d8-9ed2-00e8684001db" (UID: "b86756d4-4a7b-47d8-9ed2-00e8684001db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.244722 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config" (OuterVolumeSpecName: "config") pod "b86756d4-4a7b-47d8-9ed2-00e8684001db" (UID: "b86756d4-4a7b-47d8-9ed2-00e8684001db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.254400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.254726 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data" (OuterVolumeSpecName: "config-data") pod "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" (UID: "9ddc1d90-42f2-49eb-86f8-c19aba8e3c27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.270117 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b86756d4-4a7b-47d8-9ed2-00e8684001db" (UID: "b86756d4-4a7b-47d8-9ed2-00e8684001db"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282110 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282139 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2mf\" (UniqueName: \"kubernetes.io/projected/b86756d4-4a7b-47d8-9ed2-00e8684001db-kube-api-access-sj2mf\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282151 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282160 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282168 4687 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b86756d4-4a7b-47d8-9ed2-00e8684001db-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282178 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.282186 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.293103 4687 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d6889c85c-hl9hs"] Mar 14 09:20:25 crc kubenswrapper[4687]: W0314 09:20:25.300863 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod167eccc2_08cb_4683_a74b_360da7bfb902.slice/crio-d68d744324c427e29ff24e41ecfd897ee4c92f43902c9bc7b22d1cdc9f8c29b3 WatchSource:0}: Error finding container d68d744324c427e29ff24e41ecfd897ee4c92f43902c9bc7b22d1cdc9f8c29b3: Status 404 returned error can't find the container with id d68d744324c427e29ff24e41ecfd897ee4c92f43902c9bc7b22d1cdc9f8c29b3 Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.790758 4687 scope.go:117] "RemoveContainer" containerID="2f9eff755ca9916fb0c668c56199ac76bcf1d96b450abed59fab8ee32c4dd1b7" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.840435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ddc1d90-42f2-49eb-86f8-c19aba8e3c27","Type":"ContainerDied","Data":"d540a2e46b8744550ac0b1523c33b536571edba4308751e6534e79b0879b994b"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.840568 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.855196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerStarted","Data":"c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.858997 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.863223 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678d959c44-m48jt" event={"ID":"b86756d4-4a7b-47d8-9ed2-00e8684001db","Type":"ContainerDied","Data":"47b214eb186409398fce120bd9fb6f8af9b56f826d490657cc3e4ece7fa0789a"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.863397 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-678d959c44-m48jt" Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.865897 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d6889c85c-hl9hs" event={"ID":"167eccc2-08cb-4683-a74b-360da7bfb902","Type":"ContainerStarted","Data":"05a0b3dd336f029074cc6e717bed166c960dda7ff97bba4ac9e7e2eae27c2c58"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.865935 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d6889c85c-hl9hs" event={"ID":"167eccc2-08cb-4683-a74b-360da7bfb902","Type":"ContainerStarted","Data":"d68d744324c427e29ff24e41ecfd897ee4c92f43902c9bc7b22d1cdc9f8c29b3"} Mar 14 09:20:25 crc kubenswrapper[4687]: I0314 09:20:25.989164 4687 scope.go:117] "RemoveContainer" containerID="b1ff5296af28bbe8f6161445345d3bac87f285384341792a7fdda1373d579468" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.038451 4687 scope.go:117] "RemoveContainer" containerID="dc9b98663d3644df84e806c8a54be29ed759fad2cadc38818500eab3e406f0e3" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.061297 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.076758 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.085301 4687 scope.go:117] "RemoveContainer" containerID="6e41a61cb27b26220956580ba74d11a6aee94a284ec21a83e6b2272ef597ad91" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.089673 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-678d959c44-m48jt"] Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.114286 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-678d959c44-m48jt"] Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.134523 4687 scope.go:117] "RemoveContainer" 
containerID="4a7d7d135c75c07ff0c56b34d62fcf667a2436aff8a500f75e348192f4bd1454" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.134930 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135453 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-api" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135468 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-api" Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135501 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-central-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135509 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-central-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135518 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="proxy-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135525 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="proxy-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135535 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135542 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135557 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" 
containerName="sg-core" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135565 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="sg-core" Mar 14 09:20:26 crc kubenswrapper[4687]: E0314 09:20:26.135583 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-notification-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135591 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-notification-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135788 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-central-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135816 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135829 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" containerName="neutron-api" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135841 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="proxy-httpd" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135852 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="ceilometer-notification-agent" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.135859 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" containerName="sg-core" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.139114 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.142268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.142712 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.149397 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.186438 4687 scope.go:117] "RemoveContainer" containerID="f71fe131d63201d760eef61d7816f3435637b2f9bf20bd4a091dc55990b846ce" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203327 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bdc\" (UniqueName: \"kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203517 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203537 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.203618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.217535 4687 scope.go:117] "RemoveContainer" containerID="36a096caf16577d162b0501d9f77c54030ea93ef69ba68060153fecd180fe864" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305307 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bdc\" (UniqueName: \"kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.305511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 
09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.306912 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.306921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.309247 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.309779 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.309967 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.311311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data\") pod \"ceilometer-0\" (UID: 
\"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.329134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bdc\" (UniqueName: \"kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc\") pod \"ceilometer-0\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.462865 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.909704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d6889c85c-hl9hs" event={"ID":"167eccc2-08cb-4683-a74b-360da7bfb902","Type":"ContainerStarted","Data":"a103d47a7922f2f00cd01769a98b3be9735acb04484a2366a424223e66b414eb"} Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.909923 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.909972 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.936646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a285a7f0-8991-4a29-a2b0-2c31bcba7433","Type":"ContainerStarted","Data":"89d704cd3197534f30b65c87d8f2369b4b988e04fa09520643be57082eb7c84e"} Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.943790 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d6889c85c-hl9hs" podStartSLOduration=10.943771574 podStartE2EDuration="10.943771574s" podCreationTimestamp="2026-03-14 09:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 09:20:26.940531203 +0000 UTC m=+1411.928771578" watchObservedRunningTime="2026-03-14 09:20:26.943771574 +0000 UTC m=+1411.932011949" Mar 14 09:20:26 crc kubenswrapper[4687]: I0314 09:20:26.970590 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.885583351 podStartE2EDuration="19.970573626s" podCreationTimestamp="2026-03-14 09:20:07 +0000 UTC" firstStartedPulling="2026-03-14 09:20:08.787428411 +0000 UTC m=+1393.775668786" lastFinishedPulling="2026-03-14 09:20:25.872418686 +0000 UTC m=+1410.860659061" observedRunningTime="2026-03-14 09:20:26.965700965 +0000 UTC m=+1411.953941340" watchObservedRunningTime="2026-03-14 09:20:26.970573626 +0000 UTC m=+1411.958814001" Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.061265 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:27 crc kubenswrapper[4687]: W0314 09:20:27.062252 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb9b408_bcf8_49f9_bbcb_43aa386b9134.slice/crio-8f43346fece4ae75840b941e123bb1d4e529e54d6dafa224c99f553d32a13362 WatchSource:0}: Error finding container 8f43346fece4ae75840b941e123bb1d4e529e54d6dafa224c99f553d32a13362: Status 404 returned error can't find the container with id 8f43346fece4ae75840b941e123bb1d4e529e54d6dafa224c99f553d32a13362 Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.258583 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.748698 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ddc1d90-42f2-49eb-86f8-c19aba8e3c27" path="/var/lib/kubelet/pods/9ddc1d90-42f2-49eb-86f8-c19aba8e3c27/volumes" Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.749725 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b86756d4-4a7b-47d8-9ed2-00e8684001db" path="/var/lib/kubelet/pods/b86756d4-4a7b-47d8-9ed2-00e8684001db/volumes" Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.955137 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerStarted","Data":"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c"} Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.956326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerStarted","Data":"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1"} Mar 14 09:20:27 crc kubenswrapper[4687]: I0314 09:20:27.956422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerStarted","Data":"8f43346fece4ae75840b941e123bb1d4e529e54d6dafa224c99f553d32a13362"} Mar 14 09:20:28 crc kubenswrapper[4687]: I0314 09:20:28.747552 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:28 crc kubenswrapper[4687]: I0314 09:20:28.776889 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:28 crc kubenswrapper[4687]: I0314 09:20:28.966461 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerStarted","Data":"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e"} Mar 14 09:20:28 crc kubenswrapper[4687]: I0314 09:20:28.966781 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.006320 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/watcher-decision-engine-0" Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.052374 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.295609 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.295850 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-log" containerID="cri-o://8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc" gracePeriod=30 Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.295953 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-httpd" containerID="cri-o://d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74" gracePeriod=30 Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.977408 4687 generic.go:334] "Generic (PLEG): container finished" podID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerID="8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc" exitCode=143 Mar 14 09:20:29 crc kubenswrapper[4687]: I0314 09:20:29.977487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerDied","Data":"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc"} Mar 14 09:20:30 crc kubenswrapper[4687]: I0314 09:20:30.985411 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" 
containerID="cri-o://c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" gracePeriod=30 Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.677357 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732633 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732819 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732846 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92g8c\" (UniqueName: \"kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732884 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: 
\"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732916 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.732997 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.733027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs\") pod \"031bbc58-0099-459e-836e-e1c58bd86f4a\" (UID: \"031bbc58-0099-459e-836e-e1c58bd86f4a\") " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.745786 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.745806 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs" (OuterVolumeSpecName: "logs") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.749309 4687 scope.go:117] "RemoveContainer" containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" Mar 14 09:20:32 crc kubenswrapper[4687]: E0314 09:20:32.759625 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.772517 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts" (OuterVolumeSpecName: "scripts") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.794604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.820655 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c" (OuterVolumeSpecName: "kube-api-access-92g8c") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "kube-api-access-92g8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.820767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837084 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837144 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837160 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92g8c\" (UniqueName: \"kubernetes.io/projected/031bbc58-0099-459e-836e-e1c58bd86f4a-kube-api-access-92g8c\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837181 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837196 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/031bbc58-0099-459e-836e-e1c58bd86f4a-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.837207 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.962424 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.962706 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerName="watcher-applier" containerID="cri-o://d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" gracePeriod=30 Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.985822 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.988870 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.991975 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.992191 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api-log" containerID="cri-o://069f96fb16f54c0be976c0cd8a345e3e2baf9f2e59692132df60655006fe1445" gracePeriod=30 Mar 14 09:20:32 crc kubenswrapper[4687]: I0314 09:20:32.992285 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api" containerID="cri-o://819059fba7527cc779425d8dc2093dff53db9b3069ed9d5f60b6474530bea4c4" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.029845 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.041885 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.048120 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data" (OuterVolumeSpecName: "config-data") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.049911 4687 generic.go:334] "Generic (PLEG): container finished" podID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerID="d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74" exitCode=0 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.049971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerDied","Data":"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74"} Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.049996 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"031bbc58-0099-459e-836e-e1c58bd86f4a","Type":"ContainerDied","Data":"5680101488bc53437f606ecbe80e1d73530e5aecd97732a05b6efc986b959764"} Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.050011 4687 scope.go:117] "RemoveContainer" containerID="d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.050116 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.058412 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d6889c85c-hl9hs" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.068485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "031bbc58-0099-459e-836e-e1c58bd86f4a" (UID: "031bbc58-0099-459e-836e-e1c58bd86f4a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.079572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerStarted","Data":"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb"} Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.079749 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-central-agent" containerID="cri-o://e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.080005 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.080044 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="proxy-httpd" containerID="cri-o://7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.080084 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="sg-core" containerID="cri-o://d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.080122 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-notification-agent" containerID="cri-o://8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.122588 4687 generic.go:334] "Generic (PLEG): container finished" podID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerID="a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f" exitCode=137 Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.123754 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.123918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerDied","Data":"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f"} Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.123942 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d78b1443-7ac5-420b-b9de-d5a6c957948c","Type":"ContainerDied","Data":"91575f2fee03d259fbc5d3c0cc52fd974924bc3f27903a5b1659ec49534603b7"} Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147089 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjfg6\" (UniqueName: \"kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: 
I0314 09:20:33.147284 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147355 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147399 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147463 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" (UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.147486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs\") pod \"d78b1443-7ac5-420b-b9de-d5a6c957948c\" 
(UID: \"d78b1443-7ac5-420b-b9de-d5a6c957948c\") " Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.148054 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.148075 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/031bbc58-0099-459e-836e-e1c58bd86f4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.151120 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs" (OuterVolumeSpecName: "logs") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.153678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6" (OuterVolumeSpecName: "kube-api-access-vjfg6") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "kube-api-access-vjfg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.153738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.160855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.166899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts" (OuterVolumeSpecName: "scripts") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.193308 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.220363 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.195545618 podStartE2EDuration="7.220324572s" podCreationTimestamp="2026-03-14 09:20:26 +0000 UTC" firstStartedPulling="2026-03-14 09:20:27.064023894 +0000 UTC m=+1412.052264269" lastFinishedPulling="2026-03-14 09:20:32.088802838 +0000 UTC m=+1417.077043223" observedRunningTime="2026-03-14 09:20:33.203975249 +0000 UTC m=+1418.192215624" watchObservedRunningTime="2026-03-14 09:20:33.220324572 +0000 UTC m=+1418.208564947" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.239594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data" (OuterVolumeSpecName: "config-data") pod "d78b1443-7ac5-420b-b9de-d5a6c957948c" (UID: "d78b1443-7ac5-420b-b9de-d5a6c957948c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250485 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjfg6\" (UniqueName: \"kubernetes.io/projected/d78b1443-7ac5-420b-b9de-d5a6c957948c-kube-api-access-vjfg6\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250518 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250527 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250535 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d78b1443-7ac5-420b-b9de-d5a6c957948c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250543 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250551 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78b1443-7ac5-420b-b9de-d5a6c957948c-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.250560 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78b1443-7ac5-420b-b9de-d5a6c957948c-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.382049 4687 scope.go:117] 
"RemoveContainer" containerID="8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.411109 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.422360 4687 scope.go:117] "RemoveContainer" containerID="d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.428201 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.428401 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74\": container with ID starting with d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74 not found: ID does not exist" containerID="d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.428447 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74"} err="failed to get container status \"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74\": rpc error: code = NotFound desc = could not find container \"d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74\": container with ID starting with d0b885f159c1b8750ee5bddd95b268588b45366070def9c7d92f25199e739c74 not found: ID does not exist" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.428478 4687 scope.go:117] "RemoveContainer" containerID="8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.431244 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc\": container with ID starting with 8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc not found: ID does not exist" containerID="8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.431293 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc"} err="failed to get container status \"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc\": rpc error: code = NotFound desc = could not find container \"8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc\": container with ID starting with 8b4c25397f0c77e36632fa630e128f25683cac63d69fcc934ed80561236525cc not found: ID does not exist" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.431321 4687 scope.go:117] "RemoveContainer" containerID="a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.445180 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.451206 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.451698 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-httpd" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.451721 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" 
containerName="glance-httpd" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.451732 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-log" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.451739 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-log" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.451757 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api-log" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.451765 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api-log" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.451793 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.451802 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.452020 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api-log" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.452040 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-log" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.452054 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" containerName="cinder-api" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.452076 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" containerName="glance-httpd" Mar 
14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.455659 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.456303 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.461463 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.461478 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.474651 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.474718 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerName="watcher-applier" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.495510 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.520273 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.534421 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.553460 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.555483 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8vj\" (UniqueName: \"kubernetes.io/projected/40393804-849f-448b-a65e-39e17e2f84cb-kube-api-access-8f8vj\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559349 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559373 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559417 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-logs\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559605 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.559753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.562936 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.563138 4687 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.563249 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.563561 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.601880 4687 scope.go:117] "RemoveContainer" containerID="d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661289 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-logs\") pod \"glance-default-external-api-0\" (UID: 
\"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661415 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f02a356-a61c-43f4-af36-acbb8e11e187-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661437 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc 
kubenswrapper[4687]: I0314 09:20:33.661500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv465\" (UniqueName: \"kubernetes.io/projected/2f02a356-a61c-43f4-af36-acbb8e11e187-kube-api-access-rv465\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-scripts\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f02a356-a61c-43f4-af36-acbb8e11e187-logs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f8vj\" (UniqueName: \"kubernetes.io/projected/40393804-849f-448b-a65e-39e17e2f84cb-kube-api-access-8f8vj\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661686 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.661751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.662222 4687 scope.go:117] "RemoveContainer" containerID="a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.662460 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.662744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-logs\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 
09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.663059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40393804-849f-448b-a65e-39e17e2f84cb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: E0314 09:20:33.664033 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f\": container with ID starting with a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f not found: ID does not exist" containerID="a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.664075 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f"} err="failed to get container status \"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f\": rpc error: code = NotFound desc = could not find container \"a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f\": container with ID starting with a291c9e06adb55dfcc4ac40c60c117a37ac847f88eb8f9693759b28611a6944f not found: ID does not exist" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.664103 4687 scope.go:117] "RemoveContainer" containerID="d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.667813 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc 
kubenswrapper[4687]: E0314 09:20:33.667970 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae\": container with ID starting with d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae not found: ID does not exist" containerID="d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.668009 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae"} err="failed to get container status \"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae\": rpc error: code = NotFound desc = could not find container \"d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae\": container with ID starting with d803c89fd2970aa12ee06c5f3f819b87c3ac3a41af7f7f6c19558953159316ae not found: ID does not exist" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.668587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.671548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.675444 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/40393804-849f-448b-a65e-39e17e2f84cb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.687090 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8vj\" (UniqueName: \"kubernetes.io/projected/40393804-849f-448b-a65e-39e17e2f84cb-kube-api-access-8f8vj\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.691827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"40393804-849f-448b-a65e-39e17e2f84cb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.748499 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031bbc58-0099-459e-836e-e1c58bd86f4a" path="/var/lib/kubelet/pods/031bbc58-0099-459e-836e-e1c58bd86f4a/volumes" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.749316 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78b1443-7ac5-420b-b9de-d5a6c957948c" path="/var/lib/kubelet/pods/d78b1443-7ac5-420b-b9de-d5a6c957948c/volumes" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.763850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.763900 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv465\" 
(UniqueName: \"kubernetes.io/projected/2f02a356-a61c-43f4-af36-acbb8e11e187-kube-api-access-rv465\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.763970 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-scripts\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.763990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f02a356-a61c-43f4-af36-acbb8e11e187-logs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.764069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.764618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f02a356-a61c-43f4-af36-acbb8e11e187-logs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.764701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.764741 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.765351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.765845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f02a356-a61c-43f4-af36-acbb8e11e187-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.766058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f02a356-a61c-43f4-af36-acbb8e11e187-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.768294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.768941 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.769671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.769982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.769983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-scripts\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.773364 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f02a356-a61c-43f4-af36-acbb8e11e187-config-data\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.790375 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv465\" (UniqueName: \"kubernetes.io/projected/2f02a356-a61c-43f4-af36-acbb8e11e187-kube-api-access-rv465\") pod \"cinder-api-0\" (UID: \"2f02a356-a61c-43f4-af36-acbb8e11e187\") " pod="openstack/cinder-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.804091 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:20:33 crc kubenswrapper[4687]: I0314 09:20:33.879234 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.136942 4687 generic.go:334] "Generic (PLEG): container finished" podID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerID="d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" exitCode=0 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.136980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4afadab3-26bf-47da-9b78-cbe30944d20f","Type":"ContainerDied","Data":"d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145827 4687 generic.go:334] "Generic (PLEG): container finished" podID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerID="7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb" exitCode=0 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145849 4687 generic.go:334] "Generic (PLEG): container finished" podID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerID="d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e" exitCode=2 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145856 4687 generic.go:334] "Generic (PLEG): container finished" podID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerID="8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c" exitCode=0 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerDied","Data":"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145926 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerDied","Data":"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.145938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerDied","Data":"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.153475 4687 generic.go:334] "Generic (PLEG): container finished" podID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerID="819059fba7527cc779425d8dc2093dff53db9b3069ed9d5f60b6474530bea4c4" exitCode=0 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.153498 4687 generic.go:334] "Generic (PLEG): container finished" podID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerID="069f96fb16f54c0be976c0cd8a345e3e2baf9f2e59692132df60655006fe1445" exitCode=143 Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.153540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerDied","Data":"819059fba7527cc779425d8dc2093dff53db9b3069ed9d5f60b6474530bea4c4"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.153566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerDied","Data":"069f96fb16f54c0be976c0cd8a345e3e2baf9f2e59692132df60655006fe1445"} Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.392714 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.441889 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.485724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle\") pod \"4afadab3-26bf-47da-9b78-cbe30944d20f\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.485897 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms9pp\" (UniqueName: \"kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp\") pod \"4afadab3-26bf-47da-9b78-cbe30944d20f\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.486062 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs\") pod \"4afadab3-26bf-47da-9b78-cbe30944d20f\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.486117 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data\") pod \"4afadab3-26bf-47da-9b78-cbe30944d20f\" (UID: \"4afadab3-26bf-47da-9b78-cbe30944d20f\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.505276 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs" (OuterVolumeSpecName: "logs") pod "4afadab3-26bf-47da-9b78-cbe30944d20f" (UID: "4afadab3-26bf-47da-9b78-cbe30944d20f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.505381 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp" (OuterVolumeSpecName: "kube-api-access-ms9pp") pod "4afadab3-26bf-47da-9b78-cbe30944d20f" (UID: "4afadab3-26bf-47da-9b78-cbe30944d20f"). InnerVolumeSpecName "kube-api-access-ms9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.550560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.566661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4afadab3-26bf-47da-9b78-cbe30944d20f" (UID: "4afadab3-26bf-47da-9b78-cbe30944d20f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.567111 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.596743 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4afadab3-26bf-47da-9b78-cbe30944d20f-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.596786 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.596803 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms9pp\" (UniqueName: \"kubernetes.io/projected/4afadab3-26bf-47da-9b78-cbe30944d20f-kube-api-access-ms9pp\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.606977 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data" (OuterVolumeSpecName: "config-data") pod "4afadab3-26bf-47da-9b78-cbe30944d20f" (UID: "4afadab3-26bf-47da-9b78-cbe30944d20f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698135 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698217 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698483 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698602 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698678 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzlw8\" (UniqueName: 
\"kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.698755 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs\") pod \"266d9643-02bf-4a10-b3ba-fa6706150eb3\" (UID: \"266d9643-02bf-4a10-b3ba-fa6706150eb3\") " Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.699217 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afadab3-26bf-47da-9b78-cbe30944d20f-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.699827 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs" (OuterVolumeSpecName: "logs") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.705042 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8" (OuterVolumeSpecName: "kube-api-access-xzlw8") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "kube-api-access-xzlw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.741567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.757895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.763371 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data" (OuterVolumeSpecName: "config-data") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.796423 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: "266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.800978 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.801020 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.801035 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.801045 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/266d9643-02bf-4a10-b3ba-fa6706150eb3-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.801056 4687 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.801068 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzlw8\" (UniqueName: \"kubernetes.io/projected/266d9643-02bf-4a10-b3ba-fa6706150eb3-kube-api-access-xzlw8\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.815230 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "266d9643-02bf-4a10-b3ba-fa6706150eb3" (UID: 
"266d9643-02bf-4a10-b3ba-fa6706150eb3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:34 crc kubenswrapper[4687]: I0314 09:20:34.903776 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/266d9643-02bf-4a10-b3ba-fa6706150eb3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.172275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"266d9643-02bf-4a10-b3ba-fa6706150eb3","Type":"ContainerDied","Data":"c71c9f3f61cc72a12e1bc1bdf6455213b52bfa9a9f4bf75891e403fb844da31e"} Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.172350 4687 scope.go:117] "RemoveContainer" containerID="819059fba7527cc779425d8dc2093dff53db9b3069ed9d5f60b6474530bea4c4" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.172498 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.179780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f02a356-a61c-43f4-af36-acbb8e11e187","Type":"ContainerStarted","Data":"5a8dd33dce5d03008acf90a3fd28b82c8b807f178393a96f0c71c9842ab19383"} Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.183222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4afadab3-26bf-47da-9b78-cbe30944d20f","Type":"ContainerDied","Data":"3c90d48151eaef2249b123006c02711d9054668056e11519b31cd0b6dec29684"} Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.183313 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.204645 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40393804-849f-448b-a65e-39e17e2f84cb","Type":"ContainerStarted","Data":"d565dab5e37cc82a900b5eaa8a61ac7c40e5202d40f02ffe97139f6f42bb4fff"} Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.204692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40393804-849f-448b-a65e-39e17e2f84cb","Type":"ContainerStarted","Data":"d29be1b90ef19a8393d67cd8bf1d399b3aee218a3031df82c6190400489b0523"} Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.370530 4687 scope.go:117] "RemoveContainer" containerID="069f96fb16f54c0be976c0cd8a345e3e2baf9f2e59692132df60655006fe1445" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.401453 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.417804 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.418685 4687 scope.go:117] "RemoveContainer" containerID="d3619cc74efdd02256cefc428b9a200490fe7811c243ccd3c9ba07d24f93f5ee" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.439904 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.453751 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.470414 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: E0314 09:20:35.470928 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" 
containerName="watcher-api-log" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.470951 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api-log" Mar 14 09:20:35 crc kubenswrapper[4687]: E0314 09:20:35.470968 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.470975 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api" Mar 14 09:20:35 crc kubenswrapper[4687]: E0314 09:20:35.470988 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerName="watcher-applier" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.470994 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerName="watcher-applier" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.471162 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api-log" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.471176 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" containerName="watcher-api" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.471198 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" containerName="watcher-applier" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.472398 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.479203 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.484567 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.484893 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.484954 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.485410 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.499385 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.530396 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.548721 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625057 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-config-data\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625434 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625530 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-public-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625554 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6qg\" (UniqueName: \"kubernetes.io/projected/41e97271-7c86-445b-9af3-4ff4d74a8c84-kube-api-access-mx6qg\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61680d0d-2191-4534-bdaf-0032b9ebe805-logs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-combined-ca-bundle\") pod 
\"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625693 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkq9\" (UniqueName: \"kubernetes.io/projected/61680d0d-2191-4534-bdaf-0032b9ebe805-kube-api-access-pjkq9\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625797 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e97271-7c86-445b-9af3-4ff4d74a8c84-logs\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.625832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-config-data\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-config-data\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc 
kubenswrapper[4687]: I0314 09:20:35.729748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-config-data\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729807 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-public-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729901 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx6qg\" (UniqueName: \"kubernetes.io/projected/41e97271-7c86-445b-9af3-4ff4d74a8c84-kube-api-access-mx6qg\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/61680d0d-2191-4534-bdaf-0032b9ebe805-logs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.729989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.730018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.730041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjkq9\" (UniqueName: \"kubernetes.io/projected/61680d0d-2191-4534-bdaf-0032b9ebe805-kube-api-access-pjkq9\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.730073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e97271-7c86-445b-9af3-4ff4d74a8c84-logs\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.730574 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e97271-7c86-445b-9af3-4ff4d74a8c84-logs\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc 
kubenswrapper[4687]: I0314 09:20:35.734267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61680d0d-2191-4534-bdaf-0032b9ebe805-logs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.737081 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.739360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.738432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-config-data\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.743389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-public-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.744836 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e97271-7c86-445b-9af3-4ff4d74a8c84-config-data\") pod \"watcher-applier-0\" (UID: 
\"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.749623 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.749859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61680d0d-2191-4534-bdaf-0032b9ebe805-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.752970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjkq9\" (UniqueName: \"kubernetes.io/projected/61680d0d-2191-4534-bdaf-0032b9ebe805-kube-api-access-pjkq9\") pod \"watcher-api-0\" (UID: \"61680d0d-2191-4534-bdaf-0032b9ebe805\") " pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.753164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx6qg\" (UniqueName: \"kubernetes.io/projected/41e97271-7c86-445b-9af3-4ff4d74a8c84-kube-api-access-mx6qg\") pod \"watcher-applier-0\" (UID: \"41e97271-7c86-445b-9af3-4ff4d74a8c84\") " pod="openstack/watcher-applier-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.763275 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266d9643-02bf-4a10-b3ba-fa6706150eb3" path="/var/lib/kubelet/pods/266d9643-02bf-4a10-b3ba-fa6706150eb3/volumes" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.764502 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afadab3-26bf-47da-9b78-cbe30944d20f" 
path="/var/lib/kubelet/pods/4afadab3-26bf-47da-9b78-cbe30944d20f/volumes" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.864197 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:20:35 crc kubenswrapper[4687]: I0314 09:20:35.965739 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:36 crc kubenswrapper[4687]: I0314 09:20:36.254490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40393804-849f-448b-a65e-39e17e2f84cb","Type":"ContainerStarted","Data":"05adb513fca85a04bf3144a0c301222f61262101ae2d0dc4b748808d148e33cb"} Mar 14 09:20:36 crc kubenswrapper[4687]: I0314 09:20:36.268395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f02a356-a61c-43f4-af36-acbb8e11e187","Type":"ContainerStarted","Data":"63c3f795ddf4eb8aa9fbd1004a48ced45098c72e8a043c6aae58bbd5a48e9e22"} Mar 14 09:20:36 crc kubenswrapper[4687]: I0314 09:20:36.294938 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.294916708 podStartE2EDuration="3.294916708s" podCreationTimestamp="2026-03-14 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:36.278133924 +0000 UTC m=+1421.266374299" watchObservedRunningTime="2026-03-14 09:20:36.294916708 +0000 UTC m=+1421.283157073" Mar 14 09:20:36 crc kubenswrapper[4687]: I0314 09:20:36.331973 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:20:36 crc kubenswrapper[4687]: I0314 09:20:36.575937 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:36 crc kubenswrapper[4687]: W0314 09:20:36.588110 4687 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41e97271_7c86_445b_9af3_4ff4d74a8c84.slice/crio-ca40a748b3ba194c70e6733568afefe63407dea1ebb29a5467803e30892d592b WatchSource:0}: Error finding container ca40a748b3ba194c70e6733568afefe63407dea1ebb29a5467803e30892d592b: Status 404 returned error can't find the container with id ca40a748b3ba194c70e6733568afefe63407dea1ebb29a5467803e30892d592b Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.292932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f02a356-a61c-43f4-af36-acbb8e11e187","Type":"ContainerStarted","Data":"5d06bef1ca1dee4a377473f55d0ff1f880aeeebb7037da3e27a9b8be34571734"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.295037 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.328319 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.328295329 podStartE2EDuration="4.328295329s" podCreationTimestamp="2026-03-14 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:37.315672237 +0000 UTC m=+1422.303912612" watchObservedRunningTime="2026-03-14 09:20:37.328295329 +0000 UTC m=+1422.316535704" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.330679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"61680d0d-2191-4534-bdaf-0032b9ebe805","Type":"ContainerStarted","Data":"5a05ae6f4088d2029033a0f7ef67d7aac6317f9cf849d26c171ce9b8fe91e2f2"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.330735 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"61680d0d-2191-4534-bdaf-0032b9ebe805","Type":"ContainerStarted","Data":"92ac2881a096631f3a8aef470d0e1621132e0717a55d6cffd76672b920ca86c2"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.330748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"61680d0d-2191-4534-bdaf-0032b9ebe805","Type":"ContainerStarted","Data":"b8c7bdbd15c10f63e2e0503fac66d3c5bc2dcbe05241b626ec4d2553d08cd049"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.331063 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.333397 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"41e97271-7c86-445b-9af3-4ff4d74a8c84","Type":"ContainerStarted","Data":"33b7e172e675b168c54178e0ae6e57771f3c74b746c2150e8e3922128a96e099"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.333430 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"41e97271-7c86-445b-9af3-4ff4d74a8c84","Type":"ContainerStarted","Data":"ca40a748b3ba194c70e6733568afefe63407dea1ebb29a5467803e30892d592b"} Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.361597 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.361571611 podStartE2EDuration="2.361571611s" podCreationTimestamp="2026-03-14 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:37.359972621 +0000 UTC m=+1422.348212996" watchObservedRunningTime="2026-03-14 09:20:37.361571611 +0000 UTC m=+1422.349811986" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.388594 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.388576078 
podStartE2EDuration="2.388576078s" podCreationTimestamp="2026-03-14 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:37.385921392 +0000 UTC m=+1422.374161767" watchObservedRunningTime="2026-03-14 09:20:37.388576078 +0000 UTC m=+1422.376816443" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.736711 4687 scope.go:117] "RemoveContainer" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" Mar 14 09:20:37 crc kubenswrapper[4687]: E0314 09:20:37.737239 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.932537 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mn87f"] Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.934656 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.944208 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mn87f"] Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.988293 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:37 crc kubenswrapper[4687]: I0314 09:20:37.988727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qf5n\" (UniqueName: \"kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.001732 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090285 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090394 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090443 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090543 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5bdc\" (UniqueName: 
\"kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090611 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml\") pod \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\" (UID: \"ceb9b408-bcf8-49f9-bbcb-43aa386b9134\") " Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.090905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qf5n\" (UniqueName: \"kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.091027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.091753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.091764 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: 
"ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.091786 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.111002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts" (OuterVolumeSpecName: "scripts") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.121376 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc" (OuterVolumeSpecName: "kube-api-access-b5bdc") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "kube-api-access-b5bdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.124675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qf5n\" (UniqueName: \"kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n\") pod \"nova-api-db-create-mn87f\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168259 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ef00-account-create-update-smxvb"] Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.168717 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-notification-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168728 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-notification-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.168745 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="proxy-httpd" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168751 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="proxy-httpd" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.168761 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-central-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168767 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-central-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.168779 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="sg-core" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168784 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="sg-core" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.168992 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="proxy-httpd" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.169005 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-central-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.169014 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="sg-core" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.169030 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerName="ceilometer-notification-agent" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.176159 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.182964 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.204864 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.204892 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.204903 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.204912 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5bdc\" (UniqueName: \"kubernetes.io/projected/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-kube-api-access-b5bdc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.214114 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ef00-account-create-update-smxvb"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.248870 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.303395 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2b6tg"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.304667 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.308722 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclvl\" (UniqueName: \"kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.308802 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.308898 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.323223 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.359879 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2b6tg"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.407461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data" (OuterVolumeSpecName: "config-data") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.419492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.419925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclvl\" (UniqueName: \"kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.420098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776mj\" (UniqueName: \"kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.420156 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.421687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb9b408-bcf8-49f9-bbcb-43aa386b9134" (UID: "ceb9b408-bcf8-49f9-bbcb-43aa386b9134"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.423783 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.425311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.442694 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-08a7-account-create-update-drm7n"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.450273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclvl\" (UniqueName: \"kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl\") pod \"nova-api-ef00-account-create-update-smxvb\" (UID: 
\"369912db-6ae1-47e5-b92b-842730ab4379\") " pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.457304 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.459435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.477528 4687 generic.go:334] "Generic (PLEG): container finished" podID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" containerID="e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1" exitCode=0 Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.478416 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerDied","Data":"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1"} Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.478513 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb9b408-bcf8-49f9-bbcb-43aa386b9134","Type":"ContainerDied","Data":"8f43346fece4ae75840b941e123bb1d4e529e54d6dafa224c99f553d32a13362"} Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.478534 4687 scope.go:117] "RemoveContainer" containerID="7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.479083 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.485062 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.503576 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-08a7-account-create-update-drm7n"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.526485 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-776mj\" (UniqueName: \"kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.526576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.526613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5g9\" (UniqueName: \"kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.526826 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 
09:20:38.532890 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb9b408-bcf8-49f9-bbcb-43aa386b9134-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.534959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.589396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-776mj\" (UniqueName: \"kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj\") pod \"nova-cell0-db-create-2b6tg\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.622457 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.624184 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mxbml"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.625840 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.637459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.637671 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d5g9\" (UniqueName: \"kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.640697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.668460 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mxbml"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.691965 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d5g9\" (UniqueName: \"kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9\") pod \"nova-cell0-08a7-account-create-update-drm7n\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.728379 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f136-account-create-update-xxfd6"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.729704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.734322 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.735138 4687 scope.go:117] "RemoveContainer" containerID="d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.743777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndcw\" (UniqueName: \"kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.744206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.759540 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f136-account-create-update-xxfd6"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.789697 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.806706 4687 scope.go:117] "RemoveContainer" containerID="8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c" Mar 14 09:20:38 
crc kubenswrapper[4687]: I0314 09:20:38.821470 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.845545 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts\") pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.845788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.845910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndcw\" (UniqueName: \"kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.846052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9c5\" (UniqueName: \"kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5\") pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.846627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.869141 4687 scope.go:117] "RemoveContainer" containerID="e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.869265 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.872129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.879709 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.879983 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.880642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndcw\" (UniqueName: \"kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw\") pod \"nova-cell1-db-create-mxbml\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.892588 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.944027 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.947847 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.947902 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.947980 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbhp\" (UniqueName: \"kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948049 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn9c5\" (UniqueName: \"kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5\") pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948102 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " 
pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948128 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts\") pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.948522 4687 scope.go:117] "RemoveContainer" containerID="7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.949248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts\") 
pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.950692 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb\": container with ID starting with 7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb not found: ID does not exist" containerID="7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.950729 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb"} err="failed to get container status \"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb\": rpc error: code = NotFound desc = could not find container \"7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb\": container with ID starting with 7335fdb1a91846622d07e476f5fea451cfb0a8c3c1dda0226a636bab572a80fb not found: ID does not exist" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.950758 4687 scope.go:117] "RemoveContainer" containerID="d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.951197 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e\": container with ID starting with d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e not found: ID does not exist" containerID="d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.951235 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e"} err="failed to get container status \"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e\": rpc error: code = NotFound desc = could not find container \"d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e\": container with ID starting with d73462f4bfcaf5384b2cb17c475c339cacee9194b5c970ca82d7d3282d2ae26e not found: ID does not exist" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.951258 4687 scope.go:117] "RemoveContainer" containerID="8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.951498 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c\": container with ID starting with 8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c not found: ID does not exist" containerID="8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.951516 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c"} err="failed to get container status \"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c\": rpc error: code = NotFound desc = could not find container \"8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c\": container with ID starting with 8d2b8e039e45b747b7979ac2b0c177f4f79290895eef3d313149e9a8b5894b4c not found: ID does not exist" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.951530 4687 scope.go:117] "RemoveContainer" containerID="e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1" Mar 14 09:20:38 crc kubenswrapper[4687]: E0314 09:20:38.951724 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1\": container with ID starting with e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1 not found: ID does not exist" containerID="e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.951741 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1"} err="failed to get container status \"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1\": rpc error: code = NotFound desc = could not find container \"e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1\": container with ID starting with e077e9c0862f128b2e6af945e588f86d7226a78a4209a080ac6b002afee27ba1 not found: ID does not exist" Mar 14 09:20:38 crc kubenswrapper[4687]: I0314 09:20:38.982161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn9c5\" (UniqueName: \"kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5\") pod \"nova-cell1-f136-account-create-update-xxfd6\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.023144 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mn87f"] Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.028966 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.051766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.051849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpbhp\" (UniqueName: \"kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.051922 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.051951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.051991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.052026 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.052077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.052950 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.053042 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.064168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.067516 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.067848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.068004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.068734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.079321 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpbhp\" (UniqueName: \"kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp\") pod \"ceilometer-0\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.217571 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.291749 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ef00-account-create-update-smxvb"] Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.328926 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-08a7-account-create-update-drm7n"] Mar 14 09:20:39 crc kubenswrapper[4687]: W0314 09:20:39.347388 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b958e37_c13b_438b_8a4e_04750c0dafed.slice/crio-a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4 WatchSource:0}: Error finding container a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4: Status 404 returned error can't find the container with id a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4 Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.451681 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2b6tg"] Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.529671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2b6tg" event={"ID":"c8524aad-2294-42c6-9b0e-3c2a9ad79954","Type":"ContainerStarted","Data":"73d68018e7ecde09e5e55cdd6ba09ff32c595dff224e80eece2697ac16d17de4"} Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.532458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mn87f" event={"ID":"db421c84-d478-4dfa-afad-36220ec7b430","Type":"ContainerStarted","Data":"ef68f0830992babb9605369cd1da697be1cf54312dc520ab80317074072f50a8"} Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.537951 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef00-account-create-update-smxvb" 
event={"ID":"369912db-6ae1-47e5-b92b-842730ab4379","Type":"ContainerStarted","Data":"1ca0e6b256dab9379ce4ef9f1cbed84e4fa7d25fdf408c1eaf76c0031ec46646"} Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.555745 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" event={"ID":"8b958e37-c13b-438b-8a4e-04750c0dafed","Type":"ContainerStarted","Data":"a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4"} Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.680087 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mxbml"] Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.772514 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb9b408-bcf8-49f9-bbcb-43aa386b9134" path="/var/lib/kubelet/pods/ceb9b408-bcf8-49f9-bbcb-43aa386b9134/volumes" Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.799922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f136-account-create-update-xxfd6"] Mar 14 09:20:39 crc kubenswrapper[4687]: I0314 09:20:39.897402 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:39 crc kubenswrapper[4687]: W0314 09:20:39.905490 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89001a8_3fdb_4c15_9c25_eb140e99ef8e.slice/crio-4f1f60f504d762500dba0ad903bb40ae5fd5af7940e1be5b09c107ae298026ab WatchSource:0}: Error finding container 4f1f60f504d762500dba0ad903bb40ae5fd5af7940e1be5b09c107ae298026ab: Status 404 returned error can't find the container with id 4f1f60f504d762500dba0ad903bb40ae5fd5af7940e1be5b09c107ae298026ab Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.564623 4687 generic.go:334] "Generic (PLEG): container finished" podID="8b958e37-c13b-438b-8a4e-04750c0dafed" 
containerID="9868c41ed7e30d341a57b02894154b3c08e9139d69dd07636b5db4aa5d149082" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.564945 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" event={"ID":"8b958e37-c13b-438b-8a4e-04750c0dafed","Type":"ContainerDied","Data":"9868c41ed7e30d341a57b02894154b3c08e9139d69dd07636b5db4aa5d149082"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.566972 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e6d8139-e819-4fb5-9b1a-e8903d4a0724" containerID="09cfe380c40ddc5654448330795c9610cf63201eec25e0cbcdc1b9db6f7e8a4c" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.567024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mxbml" event={"ID":"4e6d8139-e819-4fb5-9b1a-e8903d4a0724","Type":"ContainerDied","Data":"09cfe380c40ddc5654448330795c9610cf63201eec25e0cbcdc1b9db6f7e8a4c"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.567045 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mxbml" event={"ID":"4e6d8139-e819-4fb5-9b1a-e8903d4a0724","Type":"ContainerStarted","Data":"1195904f048f3da684f4135ae39679d6980d29a82a0aeb8103571b9f4e7731ed"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.570445 4687 generic.go:334] "Generic (PLEG): container finished" podID="7b9f1369-1580-409d-801b-8da0edcdbedc" containerID="3d44038165bf2a85e3c5245fbe057fce19324262c24b75cdade37988e7487451" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.570582 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" event={"ID":"7b9f1369-1580-409d-801b-8da0edcdbedc","Type":"ContainerDied","Data":"3d44038165bf2a85e3c5245fbe057fce19324262c24b75cdade37988e7487451"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.570602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-f136-account-create-update-xxfd6" event={"ID":"7b9f1369-1580-409d-801b-8da0edcdbedc","Type":"ContainerStarted","Data":"4b097d3e4155fe0080b402d70349a0306258866bcf3c5e22e41dc519eee544b9"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.572700 4687 generic.go:334] "Generic (PLEG): container finished" podID="c8524aad-2294-42c6-9b0e-3c2a9ad79954" containerID="2de57c1c33971ae1c63054d7c456e7aa5c597861e1387055a73a6edae16cc777" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.572742 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2b6tg" event={"ID":"c8524aad-2294-42c6-9b0e-3c2a9ad79954","Type":"ContainerDied","Data":"2de57c1c33971ae1c63054d7c456e7aa5c597861e1387055a73a6edae16cc777"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.574856 4687 generic.go:334] "Generic (PLEG): container finished" podID="db421c84-d478-4dfa-afad-36220ec7b430" containerID="1e0bd4a5d9201ebc188577ad9efe7ebedf717f56c96656c87d5ebfc71d146107" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.574933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mn87f" event={"ID":"db421c84-d478-4dfa-afad-36220ec7b430","Type":"ContainerDied","Data":"1e0bd4a5d9201ebc188577ad9efe7ebedf717f56c96656c87d5ebfc71d146107"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.580513 4687 generic.go:334] "Generic (PLEG): container finished" podID="369912db-6ae1-47e5-b92b-842730ab4379" containerID="49e18307a31f8de78df2ed94f4f273d1cd4998a540c666b3a5eb860b6b48e526" exitCode=0 Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.580585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef00-account-create-update-smxvb" event={"ID":"369912db-6ae1-47e5-b92b-842730ab4379","Type":"ContainerDied","Data":"49e18307a31f8de78df2ed94f4f273d1cd4998a540c666b3a5eb860b6b48e526"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.583213 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerStarted","Data":"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.583354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerStarted","Data":"4f1f60f504d762500dba0ad903bb40ae5fd5af7940e1be5b09c107ae298026ab"} Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.865062 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.865504 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.953401 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:20:40 crc kubenswrapper[4687]: I0314 09:20:40.966960 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 14 09:20:41 crc kubenswrapper[4687]: I0314 09:20:41.596399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerStarted","Data":"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b"} Mar 14 09:20:41 crc kubenswrapper[4687]: I0314 09:20:41.596663 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerStarted","Data":"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe"} Mar 14 09:20:41 crc kubenswrapper[4687]: I0314 09:20:41.990987 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:41 crc kubenswrapper[4687]: I0314 09:20:41.991237 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-log" containerID="cri-o://4915c64d332bdf697424c821840524e1b7ace5747307b878c04490adb0b89408" gracePeriod=30 Mar 14 09:20:41 crc kubenswrapper[4687]: I0314 09:20:41.991719 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-httpd" containerID="cri-o://96d1c1bfa20a063ccb43bde90999f15ab5163e4da6555f6328a45698e8a1db72" gracePeriod=30 Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.143648 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.256595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts\") pod \"8b958e37-c13b-438b-8a4e-04750c0dafed\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.256679 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d5g9\" (UniqueName: \"kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9\") pod \"8b958e37-c13b-438b-8a4e-04750c0dafed\" (UID: \"8b958e37-c13b-438b-8a4e-04750c0dafed\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.257510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b958e37-c13b-438b-8a4e-04750c0dafed" (UID: "8b958e37-c13b-438b-8a4e-04750c0dafed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.277974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9" (OuterVolumeSpecName: "kube-api-access-8d5g9") pod "8b958e37-c13b-438b-8a4e-04750c0dafed" (UID: "8b958e37-c13b-438b-8a4e-04750c0dafed"). InnerVolumeSpecName "kube-api-access-8d5g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.359758 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b958e37-c13b-438b-8a4e-04750c0dafed-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.359793 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d5g9\" (UniqueName: \"kubernetes.io/projected/8b958e37-c13b-438b-8a4e-04750c0dafed-kube-api-access-8d5g9\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.459461 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.476569 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.489810 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.501737 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.508676 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.565496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn9c5\" (UniqueName: \"kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5\") pod \"7b9f1369-1580-409d-801b-8da0edcdbedc\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.567987 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qf5n\" (UniqueName: \"kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n\") pod \"db421c84-d478-4dfa-afad-36220ec7b430\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.568155 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rclvl\" (UniqueName: \"kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl\") pod \"369912db-6ae1-47e5-b92b-842730ab4379\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.569954 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts\") pod \"7b9f1369-1580-409d-801b-8da0edcdbedc\" (UID: \"7b9f1369-1580-409d-801b-8da0edcdbedc\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.570017 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts\") pod \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.570054 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts\") pod \"db421c84-d478-4dfa-afad-36220ec7b430\" (UID: \"db421c84-d478-4dfa-afad-36220ec7b430\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.570079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-776mj\" (UniqueName: \"kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj\") pod \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\" (UID: \"c8524aad-2294-42c6-9b0e-3c2a9ad79954\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.570239 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts\") pod \"369912db-6ae1-47e5-b92b-842730ab4379\" (UID: \"369912db-6ae1-47e5-b92b-842730ab4379\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.572661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "369912db-6ae1-47e5-b92b-842730ab4379" (UID: "369912db-6ae1-47e5-b92b-842730ab4379"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.573441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b9f1369-1580-409d-801b-8da0edcdbedc" (UID: "7b9f1369-1580-409d-801b-8da0edcdbedc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.573801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db421c84-d478-4dfa-afad-36220ec7b430" (UID: "db421c84-d478-4dfa-afad-36220ec7b430"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.577678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8524aad-2294-42c6-9b0e-3c2a9ad79954" (UID: "c8524aad-2294-42c6-9b0e-3c2a9ad79954"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.585376 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n" (OuterVolumeSpecName: "kube-api-access-2qf5n") pod "db421c84-d478-4dfa-afad-36220ec7b430" (UID: "db421c84-d478-4dfa-afad-36220ec7b430"). InnerVolumeSpecName "kube-api-access-2qf5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.593533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5" (OuterVolumeSpecName: "kube-api-access-gn9c5") pod "7b9f1369-1580-409d-801b-8da0edcdbedc" (UID: "7b9f1369-1580-409d-801b-8da0edcdbedc"). InnerVolumeSpecName "kube-api-access-gn9c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.607445 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj" (OuterVolumeSpecName: "kube-api-access-776mj") pod "c8524aad-2294-42c6-9b0e-3c2a9ad79954" (UID: "c8524aad-2294-42c6-9b0e-3c2a9ad79954"). InnerVolumeSpecName "kube-api-access-776mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.616090 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl" (OuterVolumeSpecName: "kube-api-access-rclvl") pod "369912db-6ae1-47e5-b92b-842730ab4379" (UID: "369912db-6ae1-47e5-b92b-842730ab4379"). InnerVolumeSpecName "kube-api-access-rclvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.653084 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.652671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f136-account-create-update-xxfd6" event={"ID":"7b9f1369-1580-409d-801b-8da0edcdbedc","Type":"ContainerDied","Data":"4b097d3e4155fe0080b402d70349a0306258866bcf3c5e22e41dc519eee544b9"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.653213 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b097d3e4155fe0080b402d70349a0306258866bcf3c5e22e41dc519eee544b9" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.655211 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2b6tg" event={"ID":"c8524aad-2294-42c6-9b0e-3c2a9ad79954","Type":"ContainerDied","Data":"73d68018e7ecde09e5e55cdd6ba09ff32c595dff224e80eece2697ac16d17de4"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.656725 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d68018e7ecde09e5e55cdd6ba09ff32c595dff224e80eece2697ac16d17de4" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.655389 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2b6tg" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.662084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mn87f" event={"ID":"db421c84-d478-4dfa-afad-36220ec7b430","Type":"ContainerDied","Data":"ef68f0830992babb9605369cd1da697be1cf54312dc520ab80317074072f50a8"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.662127 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef68f0830992babb9605369cd1da697be1cf54312dc520ab80317074072f50a8" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.662195 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mn87f" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.667808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef00-account-create-update-smxvb" event={"ID":"369912db-6ae1-47e5-b92b-842730ab4379","Type":"ContainerDied","Data":"1ca0e6b256dab9379ce4ef9f1cbed84e4fa7d25fdf408c1eaf76c0031ec46646"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.667862 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca0e6b256dab9379ce4ef9f1cbed84e4fa7d25fdf408c1eaf76c0031ec46646" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.667936 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ef00-account-create-update-smxvb" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.673238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" event={"ID":"8b958e37-c13b-438b-8a4e-04750c0dafed","Type":"ContainerDied","Data":"a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.673279 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ba04bf874832dbb4280b344704c7ccac94b171292e8d5280b099f2fc7e78a4" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.673384 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-08a7-account-create-update-drm7n" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndcw\" (UniqueName: \"kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw\") pod \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts\") pod \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\" (UID: \"4e6d8139-e819-4fb5-9b1a-e8903d4a0724\") " Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e6d8139-e819-4fb5-9b1a-e8903d4a0724" (UID: "4e6d8139-e819-4fb5-9b1a-e8903d4a0724"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675869 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369912db-6ae1-47e5-b92b-842730ab4379-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675890 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn9c5\" (UniqueName: \"kubernetes.io/projected/7b9f1369-1580-409d-801b-8da0edcdbedc-kube-api-access-gn9c5\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675905 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qf5n\" (UniqueName: \"kubernetes.io/projected/db421c84-d478-4dfa-afad-36220ec7b430-kube-api-access-2qf5n\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675917 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rclvl\" (UniqueName: \"kubernetes.io/projected/369912db-6ae1-47e5-b92b-842730ab4379-kube-api-access-rclvl\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675929 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9f1369-1580-409d-801b-8da0edcdbedc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675939 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8524aad-2294-42c6-9b0e-3c2a9ad79954-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675950 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db421c84-d478-4dfa-afad-36220ec7b430-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 
09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.675961 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-776mj\" (UniqueName: \"kubernetes.io/projected/c8524aad-2294-42c6-9b0e-3c2a9ad79954-kube-api-access-776mj\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.678971 4687 generic.go:334] "Generic (PLEG): container finished" podID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerID="4915c64d332bdf697424c821840524e1b7ace5747307b878c04490adb0b89408" exitCode=143 Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.679048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerDied","Data":"4915c64d332bdf697424c821840524e1b7ace5747307b878c04490adb0b89408"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.679541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw" (OuterVolumeSpecName: "kube-api-access-rndcw") pod "4e6d8139-e819-4fb5-9b1a-e8903d4a0724" (UID: "4e6d8139-e819-4fb5-9b1a-e8903d4a0724"). InnerVolumeSpecName "kube-api-access-rndcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.683596 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mxbml" event={"ID":"4e6d8139-e819-4fb5-9b1a-e8903d4a0724","Type":"ContainerDied","Data":"1195904f048f3da684f4135ae39679d6980d29a82a0aeb8103571b9f4e7731ed"} Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.683621 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1195904f048f3da684f4135ae39679d6980d29a82a0aeb8103571b9f4e7731ed" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.683749 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mxbml" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.778035 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndcw\" (UniqueName: \"kubernetes.io/projected/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-kube-api-access-rndcw\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4687]: I0314 09:20:42.778090 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6d8139-e819-4fb5-9b1a-e8903d4a0724-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:43 crc kubenswrapper[4687]: I0314 09:20:43.199753 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:43 crc kubenswrapper[4687]: I0314 09:20:43.805307 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:20:43 crc kubenswrapper[4687]: I0314 09:20:43.805369 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:20:43 crc kubenswrapper[4687]: I0314 09:20:43.849167 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:20:43 crc kubenswrapper[4687]: I0314 09:20:43.852472 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.704304 4687 generic.go:334] "Generic (PLEG): container finished" podID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerID="96d1c1bfa20a063ccb43bde90999f15ab5163e4da6555f6328a45698e8a1db72" exitCode=0 Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.704389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerDied","Data":"96d1c1bfa20a063ccb43bde90999f15ab5163e4da6555f6328a45698e8a1db72"} Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.708666 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerStarted","Data":"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6"} Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.708939 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-central-agent" containerID="cri-o://b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb" gracePeriod=30 Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.709026 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="sg-core" containerID="cri-o://d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b" gracePeriod=30 Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.709087 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="proxy-httpd" containerID="cri-o://0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6" gracePeriod=30 Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.709151 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-notification-agent" containerID="cri-o://658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe" gracePeriod=30 Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.709423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.709498 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.919923 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:44 crc kubenswrapper[4687]: I0314 09:20:44.973637 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.107667749 podStartE2EDuration="6.973610855s" podCreationTimestamp="2026-03-14 09:20:38 +0000 UTC" firstStartedPulling="2026-03-14 09:20:39.90795335 +0000 UTC m=+1424.896193725" lastFinishedPulling="2026-03-14 09:20:43.773896456 +0000 UTC m=+1428.762136831" observedRunningTime="2026-03-14 09:20:44.741807138 +0000 UTC m=+1429.730047503" watchObservedRunningTime="2026-03-14 09:20:44.973610855 +0000 UTC m=+1429.961851230" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.032008 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rck7\" (UniqueName: \"kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.032595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.032816 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run\") 
pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.032917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.033049 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.033265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.033418 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.033531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data\") pod \"3af59caa-4445-408d-9c7a-ffed88917fa3\" (UID: \"3af59caa-4445-408d-9c7a-ffed88917fa3\") " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.035719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs" 
(OuterVolumeSpecName: "logs") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.042649 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.056279 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.057604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts" (OuterVolumeSpecName: "scripts") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.060946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7" (OuterVolumeSpecName: "kube-api-access-7rck7") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "kube-api-access-7rck7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.106550 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.135354 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data" (OuterVolumeSpecName: "config-data") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137617 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137704 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137718 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137735 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rck7\" (UniqueName: \"kubernetes.io/projected/3af59caa-4445-408d-9c7a-ffed88917fa3-kube-api-access-7rck7\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc 
kubenswrapper[4687]: I0314 09:20:45.137753 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137764 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3af59caa-4445-408d-9c7a-ffed88917fa3-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.137776 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.163153 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.180048 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3af59caa-4445-408d-9c7a-ffed88917fa3" (UID: "3af59caa-4445-408d-9c7a-ffed88917fa3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.239481 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af59caa-4445-408d-9c7a-ffed88917fa3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.239525 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.725553 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.726166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3af59caa-4445-408d-9c7a-ffed88917fa3","Type":"ContainerDied","Data":"d4fde478e2f6c4ee612d878dab57f700ef5c659ece4e46c53320fe768302452c"} Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.726248 4687 scope.go:117] "RemoveContainer" containerID="96d1c1bfa20a063ccb43bde90999f15ab5163e4da6555f6328a45698e8a1db72" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.734558 4687 generic.go:334] "Generic (PLEG): container finished" podID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerID="0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6" exitCode=0 Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.734757 4687 generic.go:334] "Generic (PLEG): container finished" podID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerID="d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b" exitCode=2 Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.734837 4687 generic.go:334] "Generic (PLEG): container finished" podID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" 
containerID="658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe" exitCode=0 Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.735727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerDied","Data":"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6"} Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.735805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerDied","Data":"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b"} Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.735825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerDied","Data":"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe"} Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.811164 4687 scope.go:117] "RemoveContainer" containerID="4915c64d332bdf697424c821840524e1b7ace5747307b878c04490adb0b89408" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.829013 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.852951 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.860542 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861017 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8524aad-2294-42c6-9b0e-3c2a9ad79954" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861081 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c8524aad-2294-42c6-9b0e-3c2a9ad79954" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861144 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369912db-6ae1-47e5-b92b-842730ab4379" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861195 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="369912db-6ae1-47e5-b92b-842730ab4379" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861254 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6d8139-e819-4fb5-9b1a-e8903d4a0724" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861312 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6d8139-e819-4fb5-9b1a-e8903d4a0724" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861404 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-httpd" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861459 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-httpd" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861523 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b958e37-c13b-438b-8a4e-04750c0dafed" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861574 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b958e37-c13b-438b-8a4e-04750c0dafed" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861640 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db421c84-d478-4dfa-afad-36220ec7b430" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 
09:20:45.861704 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db421c84-d478-4dfa-afad-36220ec7b430" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861779 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9f1369-1580-409d-801b-8da0edcdbedc" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861839 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9f1369-1580-409d-801b-8da0edcdbedc" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: E0314 09:20:45.861902 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-log" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.861956 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-log" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862200 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="369912db-6ae1-47e5-b92b-842730ab4379" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862287 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b958e37-c13b-438b-8a4e-04750c0dafed" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862367 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-httpd" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862433 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6d8139-e819-4fb5-9b1a-e8903d4a0724" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862490 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="db421c84-d478-4dfa-afad-36220ec7b430" 
containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862555 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8524aad-2294-42c6-9b0e-3c2a9ad79954" containerName="mariadb-database-create" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862611 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" containerName="glance-log" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.862668 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9f1369-1580-409d-801b-8da0edcdbedc" containerName="mariadb-account-create-update" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.863683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.865937 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.866873 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.866918 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.879046 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.882710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 14 09:20:45 crc kubenswrapper[4687]: I0314 09:20:45.967234 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.001872 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057273 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057365 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057390 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057441 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5l8\" (UniqueName: \"kubernetes.io/projected/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-kube-api-access-7p5l8\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.057458 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162822 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" 
(UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162922 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.162966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.163192 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5l8\" (UniqueName: \"kubernetes.io/projected/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-kube-api-access-7p5l8\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 
14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.163219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.166674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.173866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.174464 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.175895 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.176366 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.176903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.195648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.196071 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5l8\" (UniqueName: \"kubernetes.io/projected/cf949dbe-ea29-420f-8e2c-ae02a1980bb9-kube-api-access-7p5l8\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.222350 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf949dbe-ea29-420f-8e2c-ae02a1980bb9\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.399546 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.481019 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.572917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpbhp\" (UniqueName: \"kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573350 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573385 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: 
\"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573628 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.573651 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd\") pod \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\" (UID: \"f89001a8-3fdb-4c15-9c25-eb140e99ef8e\") " Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.574573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.575360 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.580175 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp" (OuterVolumeSpecName: "kube-api-access-vpbhp") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "kube-api-access-vpbhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.582864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts" (OuterVolumeSpecName: "scripts") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.638623 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.676116 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.676148 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.676160 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpbhp\" (UniqueName: \"kubernetes.io/projected/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-kube-api-access-vpbhp\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.676173 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc 
kubenswrapper[4687]: I0314 09:20:46.676182 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.692992 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.738219 4687 scope.go:117] "RemoveContainer" containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.738721 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.778217 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.791473 4687 generic.go:334] "Generic (PLEG): container finished" podID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerID="b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb" exitCode=0 Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.791539 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerDied","Data":"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb"} Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.791563 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f89001a8-3fdb-4c15-9c25-eb140e99ef8e","Type":"ContainerDied","Data":"4f1f60f504d762500dba0ad903bb40ae5fd5af7940e1be5b09c107ae298026ab"} Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.791582 4687 scope.go:117] "RemoveContainer" containerID="0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.792986 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.825421 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data" (OuterVolumeSpecName: "config-data") pod "f89001a8-3fdb-4c15-9c25-eb140e99ef8e" (UID: "f89001a8-3fdb-4c15-9c25-eb140e99ef8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.846871 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.880987 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89001a8-3fdb-4c15-9c25-eb140e99ef8e-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.910316 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.923063 4687 scope.go:117] "RemoveContainer" containerID="d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b" Mar 14 09:20:46 crc kubenswrapper[4687]: I0314 09:20:46.972998 4687 scope.go:117] "RemoveContainer" containerID="658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.029631 4687 scope.go:117] 
"RemoveContainer" containerID="b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.056856 4687 scope.go:117] "RemoveContainer" containerID="0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.057510 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6\": container with ID starting with 0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6 not found: ID does not exist" containerID="0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.057620 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6"} err="failed to get container status \"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6\": rpc error: code = NotFound desc = could not find container \"0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6\": container with ID starting with 0330e1165a45a1727c27aa040d0c41ffdd9db6d8dd3378a44ecd1329b5f1a7b6 not found: ID does not exist" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.057703 4687 scope.go:117] "RemoveContainer" containerID="d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.058083 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b\": container with ID starting with d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b not found: ID does not exist" containerID="d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b" Mar 14 09:20:47 crc 
kubenswrapper[4687]: I0314 09:20:47.058163 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b"} err="failed to get container status \"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b\": rpc error: code = NotFound desc = could not find container \"d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b\": container with ID starting with d151bab4caba5bd224b36110898dc3dbb541baeeb0bacd1eb68218e622f5cc6b not found: ID does not exist" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.058237 4687 scope.go:117] "RemoveContainer" containerID="658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.059538 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe\": container with ID starting with 658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe not found: ID does not exist" containerID="658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.059597 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe"} err="failed to get container status \"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe\": rpc error: code = NotFound desc = could not find container \"658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe\": container with ID starting with 658ce7b694b22a1f7c4e315e4e90a8e1e90ad6a867d6ea56cfab39ee64ad87fe not found: ID does not exist" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.059624 4687 scope.go:117] "RemoveContainer" containerID="b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb" Mar 14 
09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.060576 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb\": container with ID starting with b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb not found: ID does not exist" containerID="b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.060701 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb"} err="failed to get container status \"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb\": rpc error: code = NotFound desc = could not find container \"b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb\": container with ID starting with b7185bc751b0ba77255430f78fe9fe592d4e5d0e0cc99a9f739a91b1631201fb not found: ID does not exist" Mar 14 09:20:47 crc kubenswrapper[4687]: W0314 09:20:47.123532 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf949dbe_ea29_420f_8e2c_ae02a1980bb9.slice/crio-69f251c1f8fb35d29ff15e71826f088f4809842fdedfccdbd995fb94097d3053 WatchSource:0}: Error finding container 69f251c1f8fb35d29ff15e71826f088f4809842fdedfccdbd995fb94097d3053: Status 404 returned error can't find the container with id 69f251c1f8fb35d29ff15e71826f088f4809842fdedfccdbd995fb94097d3053 Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.152607 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.209520 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.220749 4687 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.236306 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.237033 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="proxy-httpd" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237060 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="proxy-httpd" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.237085 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-central-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237095 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-central-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.237140 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-notification-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237148 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-notification-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: E0314 09:20:47.237163 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="sg-core" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237171 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="sg-core" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237574 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" 
containerName="sg-core" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237612 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-notification-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237634 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="proxy-httpd" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.237647 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" containerName="ceilometer-central-agent" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.240108 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.244277 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.253998 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.254233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.301120 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.301471 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.393532 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 
09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.393615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.393683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdt2l\" (UniqueName: \"kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.393717 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.393907 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.394012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.394124 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496363 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496500 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdt2l\" (UniqueName: \"kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496584 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts\") pod \"ceilometer-0\" (UID: 
\"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.496781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.498871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.498996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.503245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.504072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.504492 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.504714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.513195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdt2l\" (UniqueName: \"kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l\") pod \"ceilometer-0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.594558 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.765655 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af59caa-4445-408d-9c7a-ffed88917fa3" path="/var/lib/kubelet/pods/3af59caa-4445-408d-9c7a-ffed88917fa3/volumes" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.766854 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89001a8-3fdb-4c15-9c25-eb140e99ef8e" path="/var/lib/kubelet/pods/f89001a8-3fdb-4c15-9c25-eb140e99ef8e/volumes" Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.847483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf949dbe-ea29-420f-8e2c-ae02a1980bb9","Type":"ContainerStarted","Data":"69f251c1f8fb35d29ff15e71826f088f4809842fdedfccdbd995fb94097d3053"} Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.884775 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef"} Mar 14 09:20:47 crc kubenswrapper[4687]: I0314 09:20:47.903260 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.216878 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.746390 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8t9sq"] Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.748060 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.751005 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.751239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.751436 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zk2s9" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.782658 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8t9sq"] Mar 14 09:20:48 crc kubenswrapper[4687]: E0314 09:20:48.797938 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:48 crc kubenswrapper[4687]: E0314 09:20:48.816113 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:48 crc kubenswrapper[4687]: E0314 09:20:48.824551 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:48 
crc kubenswrapper[4687]: E0314 09:20:48.824606 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.836503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.836667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.836707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhhv\" (UniqueName: \"kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.836796 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " 
pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.904827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerStarted","Data":"865d3567bcf2c92db7e231039cb76675dbccf0f5c836c1b9b65d464f09be44c6"} Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.904884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerStarted","Data":"9750bb30b214eafb59fd26d21a89e0985b446558134ec4de4430eed1847fcac5"} Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.906230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf949dbe-ea29-420f-8e2c-ae02a1980bb9","Type":"ContainerStarted","Data":"5db9da320b16b266344e6b9a10a8d5adea540239202f0382e12a9d56450adf68"} Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.939588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.939670 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhhv\" (UniqueName: \"kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.939892 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.940024 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.952977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.956053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.959837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:48 crc kubenswrapper[4687]: I0314 09:20:48.985017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhhv\" (UniqueName: 
\"kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv\") pod \"nova-cell0-conductor-db-sync-8t9sq\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.148288 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.734539 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8t9sq"] Mar 14 09:20:49 crc kubenswrapper[4687]: W0314 09:20:49.737357 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod113a938c_1831_439a_ae3c_5fbf7abfbc81.slice/crio-bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b WatchSource:0}: Error finding container bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b: Status 404 returned error can't find the container with id bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.916718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" event={"ID":"113a938c-1831-439a-ae3c-5fbf7abfbc81","Type":"ContainerStarted","Data":"bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b"} Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.920697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf949dbe-ea29-420f-8e2c-ae02a1980bb9","Type":"ContainerStarted","Data":"bd77eb3fb9194cbee852a1629f0ca1551f3ad17dff0124d368c32ff05a55d667"} Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.924348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerStarted","Data":"df4c838df92fb76f13b5b1dc0e8c847a2bd814b11ce4675598e8abbe7e474ea2"} Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.924892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerStarted","Data":"756ed19de753ca91054d4e24cef9483ef99e1944fa58f109718cc83e38d860af"} Mar 14 09:20:49 crc kubenswrapper[4687]: I0314 09:20:49.952554 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.952531766 podStartE2EDuration="4.952531766s" podCreationTimestamp="2026-03-14 09:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:49.941722639 +0000 UTC m=+1434.929963014" watchObservedRunningTime="2026-03-14 09:20:49.952531766 +0000 UTC m=+1434.940772151" Mar 14 09:20:51 crc kubenswrapper[4687]: I0314 09:20:51.959965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerStarted","Data":"3fee86fd71cf355fcc4a4d2276f4a04b092c9fe10cd103f3ba00e7bbcad66d89"} Mar 14 09:20:52 crc kubenswrapper[4687]: I0314 09:20:52.128397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:52 crc kubenswrapper[4687]: I0314 09:20:52.129462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:20:52 crc kubenswrapper[4687]: I0314 09:20:52.737216 4687 scope.go:117] "RemoveContainer" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" Mar 14 09:20:52 crc kubenswrapper[4687]: I0314 09:20:52.987140 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Mar 14 09:20:53 crc kubenswrapper[4687]: I0314 09:20:53.016128 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.74505169 podStartE2EDuration="6.01610751s" podCreationTimestamp="2026-03-14 09:20:47 +0000 UTC" firstStartedPulling="2026-03-14 09:20:48.242602973 +0000 UTC m=+1433.230843348" lastFinishedPulling="2026-03-14 09:20:51.513658793 +0000 UTC m=+1436.501899168" observedRunningTime="2026-03-14 09:20:53.007879847 +0000 UTC m=+1437.996120232" watchObservedRunningTime="2026-03-14 09:20:53.01610751 +0000 UTC m=+1438.004347885" Mar 14 09:20:53 crc kubenswrapper[4687]: I0314 09:20:53.996770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6"} Mar 14 09:20:54 crc kubenswrapper[4687]: I0314 09:20:54.433215 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:55 crc kubenswrapper[4687]: I0314 09:20:55.005741 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-central-agent" containerID="cri-o://865d3567bcf2c92db7e231039cb76675dbccf0f5c836c1b9b65d464f09be44c6" gracePeriod=30 Mar 14 09:20:55 crc kubenswrapper[4687]: I0314 09:20:55.005800 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-notification-agent" containerID="cri-o://756ed19de753ca91054d4e24cef9483ef99e1944fa58f109718cc83e38d860af" gracePeriod=30 Mar 14 09:20:55 crc kubenswrapper[4687]: I0314 09:20:55.005835 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="sg-core" containerID="cri-o://df4c838df92fb76f13b5b1dc0e8c847a2bd814b11ce4675598e8abbe7e474ea2" gracePeriod=30 Mar 14 09:20:55 crc kubenswrapper[4687]: I0314 09:20:55.005815 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="proxy-httpd" containerID="cri-o://3fee86fd71cf355fcc4a4d2276f4a04b092c9fe10cd103f3ba00e7bbcad66d89" gracePeriod=30 Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024648 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerID="3fee86fd71cf355fcc4a4d2276f4a04b092c9fe10cd103f3ba00e7bbcad66d89" exitCode=0 Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024911 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerID="df4c838df92fb76f13b5b1dc0e8c847a2bd814b11ce4675598e8abbe7e474ea2" exitCode=2 Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024919 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerID="756ed19de753ca91054d4e24cef9483ef99e1944fa58f109718cc83e38d860af" exitCode=0 Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024722 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerDied","Data":"3fee86fd71cf355fcc4a4d2276f4a04b092c9fe10cd103f3ba00e7bbcad66d89"} Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerDied","Data":"df4c838df92fb76f13b5b1dc0e8c847a2bd814b11ce4675598e8abbe7e474ea2"} Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.024966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerDied","Data":"756ed19de753ca91054d4e24cef9483ef99e1944fa58f109718cc83e38d860af"} Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.482003 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.482816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.518814 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:56 crc kubenswrapper[4687]: I0314 09:20:56.536617 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:57 crc kubenswrapper[4687]: I0314 09:20:57.036198 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:57 crc kubenswrapper[4687]: I0314 09:20:57.036250 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:58 crc kubenswrapper[4687]: I0314 09:20:58.051168 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" exitCode=1 Mar 14 09:20:58 crc kubenswrapper[4687]: I0314 09:20:58.051452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef"} Mar 14 09:20:58 crc kubenswrapper[4687]: I0314 09:20:58.051545 4687 scope.go:117] "RemoveContainer" 
containerID="9d35bfeac7eba0f7678ba32921e71a7e771b8a615d55acae73baa254c6ecca38" Mar 14 09:20:58 crc kubenswrapper[4687]: I0314 09:20:58.052373 4687 scope.go:117] "RemoveContainer" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:20:58 crc kubenswrapper[4687]: E0314 09:20:58.052575 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:20:59 crc kubenswrapper[4687]: I0314 09:20:59.400686 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:59 crc kubenswrapper[4687]: I0314 09:20:59.401089 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:59 crc kubenswrapper[4687]: I0314 09:20:59.421675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.095765 4687 generic.go:334] "Generic (PLEG): container finished" podID="eee82ec5-3847-4115-ac3c-5d9590930169" containerID="c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" exitCode=137 Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.096275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerDied","Data":"c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36"} Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.127627 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.127676 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.128588 4687 scope.go:117] "RemoveContainer" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:21:02 crc kubenswrapper[4687]: E0314 09:21:02.128847 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.220599 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:21:02 crc kubenswrapper[4687]: I0314 09:21:02.220644 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:21:03 crc kubenswrapper[4687]: E0314 09:21:03.229734 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest" Mar 14 09:21:03 crc kubenswrapper[4687]: E0314 09:21:03.230104 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest" Mar 14 09:21:03 crc kubenswrapper[4687]: E0314 09:21:03.230253 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:38.102.83.243:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfhhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-8t9sq_openstack(113a938c-1831-439a-ae3c-5fbf7abfbc81): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:21:03 crc kubenswrapper[4687]: E0314 09:21:03.231667 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.281782 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.332975 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs\") pod \"eee82ec5-3847-4115-ac3c-5d9590930169\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.333081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data\") pod \"eee82ec5-3847-4115-ac3c-5d9590930169\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.333219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle\") pod \"eee82ec5-3847-4115-ac3c-5d9590930169\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.333256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6mw\" (UniqueName: \"kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw\") pod \"eee82ec5-3847-4115-ac3c-5d9590930169\" 
(UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.333304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca\") pod \"eee82ec5-3847-4115-ac3c-5d9590930169\" (UID: \"eee82ec5-3847-4115-ac3c-5d9590930169\") " Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.334081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs" (OuterVolumeSpecName: "logs") pod "eee82ec5-3847-4115-ac3c-5d9590930169" (UID: "eee82ec5-3847-4115-ac3c-5d9590930169"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.342173 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw" (OuterVolumeSpecName: "kube-api-access-6g6mw") pod "eee82ec5-3847-4115-ac3c-5d9590930169" (UID: "eee82ec5-3847-4115-ac3c-5d9590930169"). InnerVolumeSpecName "kube-api-access-6g6mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.363299 4687 scope.go:117] "RemoveContainer" containerID="ee644f4326b4b1c2f7264b8d33cebec27bcd3a1a5bb98e2f799a121c1d3d5d0c" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.375476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eee82ec5-3847-4115-ac3c-5d9590930169" (UID: "eee82ec5-3847-4115-ac3c-5d9590930169"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.381144 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "eee82ec5-3847-4115-ac3c-5d9590930169" (UID: "eee82ec5-3847-4115-ac3c-5d9590930169"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.400435 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data" (OuterVolumeSpecName: "config-data") pod "eee82ec5-3847-4115-ac3c-5d9590930169" (UID: "eee82ec5-3847-4115-ac3c-5d9590930169"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.435391 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee82ec5-3847-4115-ac3c-5d9590930169-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.435422 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.435433 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.435442 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6mw\" (UniqueName: \"kubernetes.io/projected/eee82ec5-3847-4115-ac3c-5d9590930169-kube-api-access-6g6mw\") on node \"crc\" DevicePath \"\"" Mar 
14 09:21:03 crc kubenswrapper[4687]: I0314 09:21:03.435451 4687 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eee82ec5-3847-4115-ac3c-5d9590930169-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.118913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"eee82ec5-3847-4115-ac3c-5d9590930169","Type":"ContainerDied","Data":"d046b164ff0e22529192f37cb782838563bf4d30b295673ad9aa8bc35b98b3dd"} Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.119156 4687 scope.go:117] "RemoveContainer" containerID="c9fbb76c87a92178d7a8094c0bc7938c42a127f666e19d9fbed1c60c20271c36" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.119024 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.122736 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" exitCode=1 Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.122847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6"} Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.123260 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:21:04 crc kubenswrapper[4687]: E0314 09:21:04.123521 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:21:04 crc kubenswrapper[4687]: E0314 09:21:04.128294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.139632 4687 scope.go:117] "RemoveContainer" containerID="25c02011863bbbc02516c3a9118cefe65b676511834eeef5294d7d4d9ef6620b" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.172533 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.197106 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.214224 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:04 crc kubenswrapper[4687]: E0314 09:21:04.214810 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.214833 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: E0314 09:21:04.214850 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.214858 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.233816 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.233850 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.234588 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.236991 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.239195 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.352082 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr9bx\" (UniqueName: \"kubernetes.io/projected/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-kube-api-access-jr9bx\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.352130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.352528 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.352690 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.352834 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.454974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr9bx\" (UniqueName: \"kubernetes.io/projected/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-kube-api-access-jr9bx\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.455031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.455176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.455239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.455299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.455999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.459727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.466938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-config-data\") pod \"watcher-decision-engine-0\" (UID: 
\"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.467173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.474233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr9bx\" (UniqueName: \"kubernetes.io/projected/593ae32b-34ff-4ffa-a6b5-e636cf2afd0e-kube-api-access-jr9bx\") pod \"watcher-decision-engine-0\" (UID: \"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:04 crc kubenswrapper[4687]: I0314 09:21:04.558778 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:05 crc kubenswrapper[4687]: I0314 09:21:05.020671 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:05 crc kubenswrapper[4687]: I0314 09:21:05.141754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e","Type":"ContainerStarted","Data":"b66b97a97466493f513acd9edab8ad85d1456a54fc21f343f6c4a11972f73233"} Mar 14 09:21:05 crc kubenswrapper[4687]: I0314 09:21:05.750539 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" path="/var/lib/kubelet/pods/eee82ec5-3847-4115-ac3c-5d9590930169/volumes" Mar 14 09:21:06 crc kubenswrapper[4687]: I0314 09:21:06.162072 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"593ae32b-34ff-4ffa-a6b5-e636cf2afd0e","Type":"ContainerStarted","Data":"0442184325e4d9dd1787498d720b5c675658e89e6e845d6bb7727916b35be0ac"} Mar 14 09:21:06 crc kubenswrapper[4687]: I0314 09:21:06.207064 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.207046446 podStartE2EDuration="2.207046446s" podCreationTimestamp="2026-03-14 09:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:06.201838187 +0000 UTC m=+1451.190078562" watchObservedRunningTime="2026-03-14 09:21:06.207046446 +0000 UTC m=+1451.195286821" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.181681 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerID="865d3567bcf2c92db7e231039cb76675dbccf0f5c836c1b9b65d464f09be44c6" exitCode=0 Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.181734 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerDied","Data":"865d3567bcf2c92db7e231039cb76675dbccf0f5c836c1b9b65d464f09be44c6"} Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.182194 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0","Type":"ContainerDied","Data":"9750bb30b214eafb59fd26d21a89e0985b446558134ec4de4430eed1847fcac5"} Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.182207 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9750bb30b214eafb59fd26d21a89e0985b446558134ec4de4430eed1847fcac5" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.199313 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.223792 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.223931 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdt2l\" (UniqueName: \"kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224050 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224146 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd\") pod \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\" (UID: \"7f4f7f73-7812-47be-a826-b7bbbc6d9ad0\") " Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.224865 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.225087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.232621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts" (OuterVolumeSpecName: "scripts") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.232692 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l" (OuterVolumeSpecName: "kube-api-access-hdt2l") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "kube-api-access-hdt2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.260088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.317173 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327188 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdt2l\" (UniqueName: \"kubernetes.io/projected/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-kube-api-access-hdt2l\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327218 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327228 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327240 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327250 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.327258 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.336561 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data" (OuterVolumeSpecName: "config-data") pod "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" (UID: "7f4f7f73-7812-47be-a826-b7bbbc6d9ad0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:08 crc kubenswrapper[4687]: I0314 09:21:08.429432 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.191818 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.225140 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.236797 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.249511 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.249912 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="sg-core" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.249929 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="sg-core" Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.249939 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-notification-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.249945 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-notification-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.249958 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="proxy-httpd" Mar 14 09:21:09 crc 
kubenswrapper[4687]: I0314 09:21:09.249963 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="proxy-httpd" Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.249982 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-central-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.249988 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-central-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.249998 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250004 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: E0314 09:21:09.250015 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250022 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250211 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250220 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee82ec5-3847-4115-ac3c-5d9590930169" containerName="watcher-decision-engine" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250237 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="sg-core" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250249 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="proxy-httpd" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250259 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-notification-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.250269 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" containerName="ceilometer-central-agent" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.257246 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.262185 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.265025 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.265726 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: 
\"kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349892 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349946 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.349977 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.350047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " 
pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.452270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.452590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.452700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.452824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.452943 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.453063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: 
\"kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.453195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.453188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.453073 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.457990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.458028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.458232 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.461086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.473575 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk\") pod \"ceilometer-0\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.575201 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:09 crc kubenswrapper[4687]: I0314 09:21:09.767901 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4f7f73-7812-47be-a826-b7bbbc6d9ad0" path="/var/lib/kubelet/pods/7f4f7f73-7812-47be-a826-b7bbbc6d9ad0/volumes" Mar 14 09:21:10 crc kubenswrapper[4687]: I0314 09:21:10.040226 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:10 crc kubenswrapper[4687]: W0314 09:21:10.042984 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a4fbb3_cd3b_4d76_ba9b_56fd072fdb40.slice/crio-096a46f5d4c0d8f54f9fe78465745af4717857af68586d539b82ac70140caaed WatchSource:0}: Error finding container 096a46f5d4c0d8f54f9fe78465745af4717857af68586d539b82ac70140caaed: Status 404 returned error can't find the container with id 096a46f5d4c0d8f54f9fe78465745af4717857af68586d539b82ac70140caaed Mar 14 09:21:10 crc kubenswrapper[4687]: I0314 09:21:10.201195 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerStarted","Data":"096a46f5d4c0d8f54f9fe78465745af4717857af68586d539b82ac70140caaed"} Mar 14 09:21:11 crc kubenswrapper[4687]: I0314 09:21:11.216269 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerStarted","Data":"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5"} Mar 14 09:21:11 crc kubenswrapper[4687]: I0314 09:21:11.216779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerStarted","Data":"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971"} Mar 14 09:21:11 crc kubenswrapper[4687]: I0314 09:21:11.330311 4687 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:12 crc kubenswrapper[4687]: I0314 09:21:12.219675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:21:12 crc kubenswrapper[4687]: I0314 09:21:12.220064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:21:12 crc kubenswrapper[4687]: I0314 09:21:12.220962 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:21:12 crc kubenswrapper[4687]: E0314 09:21:12.221225 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:21:12 crc kubenswrapper[4687]: I0314 09:21:12.230747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerStarted","Data":"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070"} Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.253844 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerStarted","Data":"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0"} Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.254004 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-central-agent" containerID="cri-o://dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971" gracePeriod=30 Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 
09:21:14.254165 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="proxy-httpd" containerID="cri-o://165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0" gracePeriod=30 Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.254125 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="sg-core" containerID="cri-o://28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070" gracePeriod=30 Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.254187 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.254230 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-notification-agent" containerID="cri-o://5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5" gracePeriod=30 Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.294936 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7451991759999999 podStartE2EDuration="5.294917861s" podCreationTimestamp="2026-03-14 09:21:09 +0000 UTC" firstStartedPulling="2026-03-14 09:21:10.045378389 +0000 UTC m=+1455.033618764" lastFinishedPulling="2026-03-14 09:21:13.595097074 +0000 UTC m=+1458.583337449" observedRunningTime="2026-03-14 09:21:14.283970592 +0000 UTC m=+1459.272210967" watchObservedRunningTime="2026-03-14 09:21:14.294917861 +0000 UTC m=+1459.283158236" Mar 14 09:21:14 crc kubenswrapper[4687]: I0314 09:21:14.559653 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:14 crc kubenswrapper[4687]: 
I0314 09:21:14.585256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.266660 4687 generic.go:334] "Generic (PLEG): container finished" podID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerID="165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0" exitCode=0 Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.266943 4687 generic.go:334] "Generic (PLEG): container finished" podID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerID="28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070" exitCode=2 Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.266955 4687 generic.go:334] "Generic (PLEG): container finished" podID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerID="5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5" exitCode=0 Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.266738 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerDied","Data":"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0"} Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.267052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerDied","Data":"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070"} Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.267067 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerDied","Data":"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5"} Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.267247 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:15 crc 
kubenswrapper[4687]: I0314 09:21:15.298297 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:15 crc kubenswrapper[4687]: I0314 09:21:15.739672 4687 scope.go:117] "RemoveContainer" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:21:15 crc kubenswrapper[4687]: E0314 09:21:15.739985 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.315438 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" event={"ID":"113a938c-1831-439a-ae3c-5fbf7abfbc81","Type":"ContainerStarted","Data":"f199f955345f7a9c262e01cebbf73127f4af3751df881bef1a1ad2ff1d037309"} Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.338507 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" podStartSLOduration=2.221640305 podStartE2EDuration="31.338490111s" podCreationTimestamp="2026-03-14 09:20:48 +0000 UTC" firstStartedPulling="2026-03-14 09:20:49.74017853 +0000 UTC m=+1434.728418905" lastFinishedPulling="2026-03-14 09:21:18.857028296 +0000 UTC m=+1463.845268711" observedRunningTime="2026-03-14 09:21:19.334153153 +0000 UTC m=+1464.322393528" watchObservedRunningTime="2026-03-14 09:21:19.338490111 +0000 UTC m=+1464.326730486" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.836606 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885587 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885902 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885938 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.885999 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk\") pod \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\" (UID: \"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40\") " Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.886267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.886472 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.887012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.895907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk" (OuterVolumeSpecName: "kube-api-access-wrjgk") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "kube-api-access-wrjgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.899360 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts" (OuterVolumeSpecName: "scripts") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.913221 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.987005 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.988258 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.988297 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-kube-api-access-wrjgk\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.988311 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.988323 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:19 crc kubenswrapper[4687]: I0314 09:21:19.988364 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.012485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data" (OuterVolumeSpecName: "config-data") pod "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" (UID: "24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.090515 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.328208 4687 generic.go:334] "Generic (PLEG): container finished" podID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerID="dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971" exitCode=0 Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.328261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerDied","Data":"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971"} Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.328312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40","Type":"ContainerDied","Data":"096a46f5d4c0d8f54f9fe78465745af4717857af68586d539b82ac70140caaed"} Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.328357 4687 scope.go:117] "RemoveContainer" containerID="165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.328271 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.377357 4687 scope.go:117] "RemoveContainer" containerID="28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.379430 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.393474 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.410613 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.411104 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-central-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411124 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-central-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.411148 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="proxy-httpd" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411157 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="proxy-httpd" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.411177 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-notification-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411187 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-notification-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.411201 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="sg-core" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411209 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="sg-core" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411515 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-notification-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411557 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="proxy-httpd" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411572 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="sg-core" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.411590 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" containerName="ceilometer-central-agent" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.413288 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.413768 4687 scope.go:117] "RemoveContainer" containerID="5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.420363 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.420423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.420515 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.454793 4687 scope.go:117] "RemoveContainer" containerID="dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.481531 4687 scope.go:117] "RemoveContainer" containerID="165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.482024 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0\": container with ID starting with 165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0 not found: ID does not exist" containerID="165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.482065 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0"} err="failed to get container status \"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0\": rpc error: code = NotFound desc = could not find container \"165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0\": 
container with ID starting with 165bc5da793a7a4b14a00717b209e42338cc0bde46015b6ee1417f747ab57bb0 not found: ID does not exist" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.482127 4687 scope.go:117] "RemoveContainer" containerID="28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.482614 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070\": container with ID starting with 28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070 not found: ID does not exist" containerID="28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.482667 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070"} err="failed to get container status \"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070\": rpc error: code = NotFound desc = could not find container \"28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070\": container with ID starting with 28e8dfedc98a3a7cdebb5146eefdee6d89911872a6d8117a8aefd55b21001070 not found: ID does not exist" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.482693 4687 scope.go:117] "RemoveContainer" containerID="5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.483717 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5\": container with ID starting with 5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5 not found: ID does not exist" 
containerID="5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.483883 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5"} err="failed to get container status \"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5\": rpc error: code = NotFound desc = could not find container \"5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5\": container with ID starting with 5cee36a651e18484e7ffe19c82d1a9bb49bef9d0da07976ce6d441517a7828c5 not found: ID does not exist" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.483929 4687 scope.go:117] "RemoveContainer" containerID="dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971" Mar 14 09:21:20 crc kubenswrapper[4687]: E0314 09:21:20.484660 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971\": container with ID starting with dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971 not found: ID does not exist" containerID="dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.484694 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971"} err="failed to get container status \"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971\": rpc error: code = NotFound desc = could not find container \"dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971\": container with ID starting with dd901ab1df67c1693ab5d35b9e6e51a547c3423cc6e4b58191d2f24bfcb1e971 not found: ID does not exist" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.498247 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.498392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.498454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.498753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvzq\" (UniqueName: \"kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.498985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.499068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.499306 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: 
I0314 09:21:20.602715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.602785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvzq\" (UniqueName: \"kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.603160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.603175 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.606842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts\") pod \"ceilometer-0\" (UID: 
\"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.608102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.609231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.610246 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.620525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvzq\" (UniqueName: \"kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq\") pod \"ceilometer-0\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " pod="openstack/ceilometer-0" Mar 14 09:21:20 crc kubenswrapper[4687]: I0314 09:21:20.742749 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:21 crc kubenswrapper[4687]: I0314 09:21:21.170213 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:21 crc kubenswrapper[4687]: W0314 09:21:21.178553 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d53105_0508_4b2f_bf01_9348ac28b813.slice/crio-6a392bac3846deaf7b79519e32005f8b0d35a5738ffd01c6498cb5361377534b WatchSource:0}: Error finding container 6a392bac3846deaf7b79519e32005f8b0d35a5738ffd01c6498cb5361377534b: Status 404 returned error can't find the container with id 6a392bac3846deaf7b79519e32005f8b0d35a5738ffd01c6498cb5361377534b Mar 14 09:21:21 crc kubenswrapper[4687]: I0314 09:21:21.340259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerStarted","Data":"6a392bac3846deaf7b79519e32005f8b0d35a5738ffd01c6498cb5361377534b"} Mar 14 09:21:21 crc kubenswrapper[4687]: I0314 09:21:21.750844 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40" path="/var/lib/kubelet/pods/24a4fbb3-cd3b-4d76-ba9b-56fd072fdb40/volumes" Mar 14 09:21:22 crc kubenswrapper[4687]: I0314 09:21:22.352988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerStarted","Data":"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40"} Mar 14 09:21:22 crc kubenswrapper[4687]: I0314 09:21:22.353034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerStarted","Data":"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8"} Mar 14 09:21:23 crc kubenswrapper[4687]: I0314 09:21:23.367767 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerStarted","Data":"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1"} Mar 14 09:21:24 crc kubenswrapper[4687]: I0314 09:21:24.382120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerStarted","Data":"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865"} Mar 14 09:21:24 crc kubenswrapper[4687]: I0314 09:21:24.382439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:24 crc kubenswrapper[4687]: I0314 09:21:24.413153 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.643818742 podStartE2EDuration="4.413130766s" podCreationTimestamp="2026-03-14 09:21:20 +0000 UTC" firstStartedPulling="2026-03-14 09:21:21.180172298 +0000 UTC m=+1466.168412683" lastFinishedPulling="2026-03-14 09:21:23.949484322 +0000 UTC m=+1468.937724707" observedRunningTime="2026-03-14 09:21:24.403504408 +0000 UTC m=+1469.391744783" watchObservedRunningTime="2026-03-14 09:21:24.413130766 +0000 UTC m=+1469.401371141" Mar 14 09:21:25 crc kubenswrapper[4687]: I0314 09:21:25.745286 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:21:25 crc kubenswrapper[4687]: E0314 09:21:25.745853 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:21:29 crc kubenswrapper[4687]: I0314 09:21:29.737895 4687 scope.go:117] "RemoveContainer" 
containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:21:29 crc kubenswrapper[4687]: E0314 09:21:29.738592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:21:32 crc kubenswrapper[4687]: I0314 09:21:32.485630 4687 generic.go:334] "Generic (PLEG): container finished" podID="113a938c-1831-439a-ae3c-5fbf7abfbc81" containerID="f199f955345f7a9c262e01cebbf73127f4af3751df881bef1a1ad2ff1d037309" exitCode=0 Mar 14 09:21:32 crc kubenswrapper[4687]: I0314 09:21:32.485755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" event={"ID":"113a938c-1831-439a-ae3c-5fbf7abfbc81","Type":"ContainerDied","Data":"f199f955345f7a9c262e01cebbf73127f4af3751df881bef1a1ad2ff1d037309"} Mar 14 09:21:33 crc kubenswrapper[4687]: I0314 09:21:33.889961 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.086057 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts\") pod \"113a938c-1831-439a-ae3c-5fbf7abfbc81\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.086141 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfhhv\" (UniqueName: \"kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv\") pod \"113a938c-1831-439a-ae3c-5fbf7abfbc81\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.086480 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data\") pod \"113a938c-1831-439a-ae3c-5fbf7abfbc81\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.086522 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle\") pod \"113a938c-1831-439a-ae3c-5fbf7abfbc81\" (UID: \"113a938c-1831-439a-ae3c-5fbf7abfbc81\") " Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.091885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv" (OuterVolumeSpecName: "kube-api-access-nfhhv") pod "113a938c-1831-439a-ae3c-5fbf7abfbc81" (UID: "113a938c-1831-439a-ae3c-5fbf7abfbc81"). InnerVolumeSpecName "kube-api-access-nfhhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.094328 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts" (OuterVolumeSpecName: "scripts") pod "113a938c-1831-439a-ae3c-5fbf7abfbc81" (UID: "113a938c-1831-439a-ae3c-5fbf7abfbc81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.123847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data" (OuterVolumeSpecName: "config-data") pod "113a938c-1831-439a-ae3c-5fbf7abfbc81" (UID: "113a938c-1831-439a-ae3c-5fbf7abfbc81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.153445 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "113a938c-1831-439a-ae3c-5fbf7abfbc81" (UID: "113a938c-1831-439a-ae3c-5fbf7abfbc81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.188611 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfhhv\" (UniqueName: \"kubernetes.io/projected/113a938c-1831-439a-ae3c-5fbf7abfbc81-kube-api-access-nfhhv\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.188640 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.188650 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.188659 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113a938c-1831-439a-ae3c-5fbf7abfbc81-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.510501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" event={"ID":"113a938c-1831-439a-ae3c-5fbf7abfbc81","Type":"ContainerDied","Data":"bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b"} Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.510543 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf38f7dbe06fccc933f8f6109cd08b5be930703901d65f111a63129f488009b" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.510549 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8t9sq" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.742565 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:34 crc kubenswrapper[4687]: E0314 09:21:34.743083 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.743108 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.743383 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.744211 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.747593 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.747638 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zk2s9" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.754625 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.902069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 
09:21:34.902160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27f5j\" (UniqueName: \"kubernetes.io/projected/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-kube-api-access-27f5j\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:34 crc kubenswrapper[4687]: I0314 09:21:34.902240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.004315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27f5j\" (UniqueName: \"kubernetes.io/projected/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-kube-api-access-27f5j\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.004445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.004558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.009811 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.010880 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.022290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27f5j\" (UniqueName: \"kubernetes.io/projected/84918aa9-4677-4b9d-8cf6-e4fc0ace5144-kube-api-access-27f5j\") pod \"nova-cell0-conductor-0\" (UID: \"84918aa9-4677-4b9d-8cf6-e4fc0ace5144\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.064199 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.490991 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:35 crc kubenswrapper[4687]: W0314 09:21:35.492847 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84918aa9_4677_4b9d_8cf6_e4fc0ace5144.slice/crio-76255247848e48d7b7519e905bdfec87476c94f1ff063e9dd3132ce3926182bb WatchSource:0}: Error finding container 76255247848e48d7b7519e905bdfec87476c94f1ff063e9dd3132ce3926182bb: Status 404 returned error can't find the container with id 76255247848e48d7b7519e905bdfec87476c94f1ff063e9dd3132ce3926182bb Mar 14 09:21:35 crc kubenswrapper[4687]: I0314 09:21:35.531738 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84918aa9-4677-4b9d-8cf6-e4fc0ace5144","Type":"ContainerStarted","Data":"76255247848e48d7b7519e905bdfec87476c94f1ff063e9dd3132ce3926182bb"} Mar 14 09:21:36 crc kubenswrapper[4687]: I0314 09:21:36.549720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84918aa9-4677-4b9d-8cf6-e4fc0ace5144","Type":"ContainerStarted","Data":"6b5c25e1c09e765582f8ab30063d93d8739ab177d09c9b951c33dc79845feb1b"} Mar 14 09:21:36 crc kubenswrapper[4687]: I0314 09:21:36.550043 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:36 crc kubenswrapper[4687]: I0314 09:21:36.575019 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.574995109 podStartE2EDuration="2.574995109s" podCreationTimestamp="2026-03-14 09:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
09:21:36.565064023 +0000 UTC m=+1481.553304418" watchObservedRunningTime="2026-03-14 09:21:36.574995109 +0000 UTC m=+1481.563235494" Mar 14 09:21:36 crc kubenswrapper[4687]: I0314 09:21:36.737550 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:21:36 crc kubenswrapper[4687]: E0314 09:21:36.737761 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.112882 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.618744 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wnffc"] Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.620715 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.622610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.622812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.622950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89fd2\" (UniqueName: \"kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.623039 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.623222 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.631142 4687 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"nova-cell0-manage-scripts" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.650760 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wnffc"] Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.724756 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.724848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.724898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89fd2\" (UniqueName: \"kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.725005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.734747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.737580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.744136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.751896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89fd2\" (UniqueName: \"kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2\") pod \"nova-cell0-cell-mapping-wnffc\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.785303 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.787090 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.794079 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.813926 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.826497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtx7\" (UniqueName: \"kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.826775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.826875 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.827015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.929105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.929286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtx7\" (UniqueName: \"kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.929396 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.929437 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.930445 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.938020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.938504 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.949357 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.979774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtx7\" (UniqueName: \"kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7\") pod \"nova-api-0\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " pod="openstack/nova-api-0" Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.981785 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:40 crc kubenswrapper[4687]: I0314 09:21:40.995576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.022691 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.023041 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.048581 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.053728 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.060468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.070410 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.072032 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.077161 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.094764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.110491 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.136717 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.136768 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.136887 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kbjw2\" (UniqueName: \"kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.136956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.136993 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.137086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxth\" (UniqueName: \"kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.158391 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.160324 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.192502 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.214496 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjcr4\" (UniqueName: \"kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238534 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjw2\" (UniqueName: \"kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238638 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle\") 
pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxth\" (UniqueName: \"kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.238812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 
09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.247996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.248846 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.249937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.250013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.257870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxth\" (UniqueName: \"kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth\") pod \"nova-scheduler-0\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.262835 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjw2\" (UniqueName: 
\"kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343398 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343463 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343607 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc\") pod 
\"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343969 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.343999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.344070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6xd\" (UniqueName: \"kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.344129 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.344181 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: 
\"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.344261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjcr4\" (UniqueName: \"kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.356924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.356964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.365311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjcr4\" (UniqueName: \"kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4\") pod \"nova-metadata-0\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.421444 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.440970 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446217 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 
crc kubenswrapper[4687]: I0314 09:21:41.446327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6xd\" (UniqueName: \"kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.446995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.447239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.447624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.447777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.448129 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.464219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6xd\" (UniqueName: \"kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd\") pod \"dnsmasq-dns-67fb8f77c9-hr9c4\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.511647 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.513525 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.544882 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wnffc"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.810993 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:41 crc kubenswrapper[4687]: W0314 09:21:41.831020 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc9fd2c4_446c_4b88_9c93_490143b42ee6.slice/crio-99bf9b2b219844d9435d38d38f43dbc38b2330e6e8ffa06b5667f8bdf7d5792d WatchSource:0}: Error finding container 99bf9b2b219844d9435d38d38f43dbc38b2330e6e8ffa06b5667f8bdf7d5792d: Status 404 returned error can't find the container with id 99bf9b2b219844d9435d38d38f43dbc38b2330e6e8ffa06b5667f8bdf7d5792d Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.840997 4687 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.899364 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5b8w6"] Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.900900 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.912236 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.912384 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 14 09:21:41 crc kubenswrapper[4687]: I0314 09:21:41.931628 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5b8w6"] Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.043665 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.070621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.070697 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.070842 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.070962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kks4r\" (UniqueName: \"kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: W0314 09:21:42.126457 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcada8742_58e1_4470_9ccb_28d8a2e09a2e.slice/crio-aebbe954b39071dc6045a5a6d7f7990799d01bd5142e0acdf21c52eda4ab9d13 WatchSource:0}: Error finding container aebbe954b39071dc6045a5a6d7f7990799d01bd5142e0acdf21c52eda4ab9d13: Status 404 returned error can't find the container with id aebbe954b39071dc6045a5a6d7f7990799d01bd5142e0acdf21c52eda4ab9d13 Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.126681 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.172948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.173035 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.173072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.173113 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kks4r\" (UniqueName: \"kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.182312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.184665 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.188958 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.192015 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kks4r\" (UniqueName: \"kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r\") pod \"nova-cell1-conductor-db-sync-5b8w6\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.212827 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:21:42 crc kubenswrapper[4687]: W0314 09:21:42.213222 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc040b2_2a45_497a_844b_df1ec94af4d9.slice/crio-cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0 WatchSource:0}: Error finding container cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0: Status 404 returned error can't find the container with id cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0 Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.241466 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.378209 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.695619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6080deda-4faa-40d7-9126-9c6ff985acb1","Type":"ContainerStarted","Data":"60e6d968355af32448c4b9f523263b7f5a4d45c08ce1ccf68e7738c81eb53ce6"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.720650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wnffc" event={"ID":"9465bf07-7fc2-49a3-bc32-d6958a605b98","Type":"ContainerStarted","Data":"1dd55d4bbcaf7b020c3d6ee18240e52dd4d9d90a74197a969a60516e5ad3f343"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.720708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wnffc" event={"ID":"9465bf07-7fc2-49a3-bc32-d6958a605b98","Type":"ContainerStarted","Data":"c9af4f1841e8efbef23b3ac12b582233fb55feae712168a23e5673f13bda1a5a"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.737202 4687 scope.go:117] "RemoveContainer" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.750529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerStarted","Data":"99bf9b2b219844d9435d38d38f43dbc38b2330e6e8ffa06b5667f8bdf7d5792d"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.756999 4687 generic.go:334] "Generic (PLEG): container finished" podID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerID="1701ab5d5c42bc99f9e065a62efc087b6ae6a0317d65112bedbe57dc5adafd93" exitCode=0 Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.757066 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" event={"ID":"6cc040b2-2a45-497a-844b-df1ec94af4d9","Type":"ContainerDied","Data":"1701ab5d5c42bc99f9e065a62efc087b6ae6a0317d65112bedbe57dc5adafd93"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.757096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" event={"ID":"6cc040b2-2a45-497a-844b-df1ec94af4d9","Type":"ContainerStarted","Data":"cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.787971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cada8742-58e1-4470-9ccb-28d8a2e09a2e","Type":"ContainerStarted","Data":"aebbe954b39071dc6045a5a6d7f7990799d01bd5142e0acdf21c52eda4ab9d13"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.789119 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wnffc" podStartSLOduration=2.789109305 podStartE2EDuration="2.789109305s" podCreationTimestamp="2026-03-14 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:42.786251624 +0000 UTC m=+1487.774491999" watchObservedRunningTime="2026-03-14 09:21:42.789109305 +0000 UTC m=+1487.777349680" Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.849817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerStarted","Data":"555ee07b0a792f891a2aa75e0a6276ab945f740d1e74b15cec7eb453d2635a36"} Mar 14 09:21:42 crc kubenswrapper[4687]: I0314 09:21:42.880171 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5b8w6"] Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.861656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-5b8w6" event={"ID":"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f","Type":"ContainerStarted","Data":"29fd10a2f6f3e52fbba20c97885ce9e71a966c3cb04b637a92b060ab186daf33"} Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.862266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" event={"ID":"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f","Type":"ContainerStarted","Data":"51d8465aa3248e6d2185a1c92d2d1e30f019595232548c5859e72da8ec5aaeec"} Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.864782 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a"} Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.868522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" event={"ID":"6cc040b2-2a45-497a-844b-df1ec94af4d9","Type":"ContainerStarted","Data":"99ff8611f2bc5d219b7af1496417cc53462f6d93954e98a39f6db0637fc9b7a5"} Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.868571 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.892512 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" podStartSLOduration=2.892486483 podStartE2EDuration="2.892486483s" podCreationTimestamp="2026-03-14 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:43.886864024 +0000 UTC m=+1488.875104409" watchObservedRunningTime="2026-03-14 09:21:43.892486483 +0000 UTC m=+1488.880726888" Mar 14 09:21:43 crc kubenswrapper[4687]: I0314 09:21:43.942756 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" podStartSLOduration=2.942730064 podStartE2EDuration="2.942730064s" podCreationTimestamp="2026-03-14 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:43.937589258 +0000 UTC m=+1488.925829663" watchObservedRunningTime="2026-03-14 09:21:43.942730064 +0000 UTC m=+1488.930970459" Mar 14 09:21:44 crc kubenswrapper[4687]: I0314 09:21:44.708352 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:21:44 crc kubenswrapper[4687]: I0314 09:21:44.720837 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.907497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerStarted","Data":"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.908088 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerStarted","Data":"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.907693 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-metadata" containerID="cri-o://a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" gracePeriod=30 Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.907640 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" 
containerName="nova-metadata-log" containerID="cri-o://8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" gracePeriod=30 Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.916202 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6080deda-4faa-40d7-9126-9c6ff985acb1","Type":"ContainerStarted","Data":"1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.922446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerStarted","Data":"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.922500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerStarted","Data":"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.927900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cada8742-58e1-4470-9ccb-28d8a2e09a2e","Type":"ContainerStarted","Data":"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5"} Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.928137 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5" gracePeriod=30 Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.936679 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.643938633 podStartE2EDuration="6.936659028s" podCreationTimestamp="2026-03-14 09:21:40 
+0000 UTC" firstStartedPulling="2026-03-14 09:21:42.385264708 +0000 UTC m=+1487.373505083" lastFinishedPulling="2026-03-14 09:21:45.677985103 +0000 UTC m=+1490.666225478" observedRunningTime="2026-03-14 09:21:46.934346551 +0000 UTC m=+1491.922586916" watchObservedRunningTime="2026-03-14 09:21:46.936659028 +0000 UTC m=+1491.924899403" Mar 14 09:21:46 crc kubenswrapper[4687]: I0314 09:21:46.966086 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.127787183 podStartE2EDuration="6.966071145s" podCreationTimestamp="2026-03-14 09:21:40 +0000 UTC" firstStartedPulling="2026-03-14 09:21:41.840767077 +0000 UTC m=+1486.829007452" lastFinishedPulling="2026-03-14 09:21:45.679051039 +0000 UTC m=+1490.667291414" observedRunningTime="2026-03-14 09:21:46.954027497 +0000 UTC m=+1491.942267882" watchObservedRunningTime="2026-03-14 09:21:46.966071145 +0000 UTC m=+1491.954311520" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.014471 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.502467109 podStartE2EDuration="7.01444838s" podCreationTimestamp="2026-03-14 09:21:40 +0000 UTC" firstStartedPulling="2026-03-14 09:21:42.129483879 +0000 UTC m=+1487.117724254" lastFinishedPulling="2026-03-14 09:21:45.64146513 +0000 UTC m=+1490.629705525" observedRunningTime="2026-03-14 09:21:46.973042037 +0000 UTC m=+1491.961282422" watchObservedRunningTime="2026-03-14 09:21:47.01444838 +0000 UTC m=+1492.002688755" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.039500 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.410568927 podStartE2EDuration="7.039480678s" podCreationTimestamp="2026-03-14 09:21:40 +0000 UTC" firstStartedPulling="2026-03-14 09:21:42.055579923 +0000 UTC m=+1487.043820298" lastFinishedPulling="2026-03-14 09:21:45.684491674 +0000 UTC 
m=+1490.672732049" observedRunningTime="2026-03-14 09:21:46.989679227 +0000 UTC m=+1491.977919602" watchObservedRunningTime="2026-03-14 09:21:47.039480678 +0000 UTC m=+1492.027721053" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.554498 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.719197 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle\") pod \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.719292 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjcr4\" (UniqueName: \"kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4\") pod \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.719317 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data\") pod \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.719362 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs\") pod \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\" (UID: \"ec264aa6-95b2-455c-9dda-1ac79e91c3bd\") " Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.720007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs" (OuterVolumeSpecName: 
"logs") pod "ec264aa6-95b2-455c-9dda-1ac79e91c3bd" (UID: "ec264aa6-95b2-455c-9dda-1ac79e91c3bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.725968 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4" (OuterVolumeSpecName: "kube-api-access-zjcr4") pod "ec264aa6-95b2-455c-9dda-1ac79e91c3bd" (UID: "ec264aa6-95b2-455c-9dda-1ac79e91c3bd"). InnerVolumeSpecName "kube-api-access-zjcr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.755327 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data" (OuterVolumeSpecName: "config-data") pod "ec264aa6-95b2-455c-9dda-1ac79e91c3bd" (UID: "ec264aa6-95b2-455c-9dda-1ac79e91c3bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.755751 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec264aa6-95b2-455c-9dda-1ac79e91c3bd" (UID: "ec264aa6-95b2-455c-9dda-1ac79e91c3bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.822069 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.822323 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjcr4\" (UniqueName: \"kubernetes.io/projected/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-kube-api-access-zjcr4\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.822447 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.822532 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec264aa6-95b2-455c-9dda-1ac79e91c3bd-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.939018 4687 generic.go:334] "Generic (PLEG): container finished" podID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerID="a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" exitCode=0 Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.940974 4687 generic.go:334] "Generic (PLEG): container finished" podID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerID="8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" exitCode=143 Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.940938 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.940768 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerDied","Data":"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d"} Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.942044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerDied","Data":"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a"} Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.942129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec264aa6-95b2-455c-9dda-1ac79e91c3bd","Type":"ContainerDied","Data":"555ee07b0a792f891a2aa75e0a6276ab945f740d1e74b15cec7eb453d2635a36"} Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.942227 4687 scope.go:117] "RemoveContainer" containerID="a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.975467 4687 scope.go:117] "RemoveContainer" containerID="8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" Mar 14 09:21:47 crc kubenswrapper[4687]: I0314 09:21:47.989955 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.006104 4687 scope.go:117] "RemoveContainer" containerID="a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" Mar 14 09:21:48 crc kubenswrapper[4687]: E0314 09:21:48.006465 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d\": container with ID starting with a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d 
not found: ID does not exist" containerID="a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.006504 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d"} err="failed to get container status \"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d\": rpc error: code = NotFound desc = could not find container \"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d\": container with ID starting with a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d not found: ID does not exist" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.006523 4687 scope.go:117] "RemoveContainer" containerID="8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" Mar 14 09:21:48 crc kubenswrapper[4687]: E0314 09:21:48.006860 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a\": container with ID starting with 8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a not found: ID does not exist" containerID="8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.006881 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a"} err="failed to get container status \"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a\": rpc error: code = NotFound desc = could not find container \"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a\": container with ID starting with 8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a not found: ID does not exist" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 
09:21:48.006902 4687 scope.go:117] "RemoveContainer" containerID="a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.007716 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d"} err="failed to get container status \"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d\": rpc error: code = NotFound desc = could not find container \"a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d\": container with ID starting with a8c8a1dc493d93f7b043f5c230b4eee26fb054ec7a95821141611ef7e0de488d not found: ID does not exist" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.007740 4687 scope.go:117] "RemoveContainer" containerID="8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.007994 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a"} err="failed to get container status \"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a\": rpc error: code = NotFound desc = could not find container \"8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a\": container with ID starting with 8cdc27789324385bc0e81eb298be123bba0ab92a4ff65cf1a7a5f108ac7ee08a not found: ID does not exist" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.015397 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.026235 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:48 crc kubenswrapper[4687]: E0314 09:21:48.026781 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" 
containerName="nova-metadata-log" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.026807 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-log" Mar 14 09:21:48 crc kubenswrapper[4687]: E0314 09:21:48.026832 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-metadata" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.026842 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-metadata" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.027112 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-metadata" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.027158 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" containerName="nova-metadata-log" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.031427 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.038084 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.038361 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.066513 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.129617 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.129804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.129934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkzc\" (UniqueName: \"kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.129961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.129985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.232100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.232648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.232882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.233000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: 
I0314 09:21:48.233134 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzkzc\" (UniqueName: \"kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.233174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.238095 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.238374 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.244141 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.250398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzkzc\" (UniqueName: 
\"kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc\") pod \"nova-metadata-0\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.357005 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:48 crc kubenswrapper[4687]: W0314 09:21:48.860815 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147bfd77_4211_4c6c_a3cd_b3f71f477898.slice/crio-aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37 WatchSource:0}: Error finding container aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37: Status 404 returned error can't find the container with id aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37 Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.866818 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:48 crc kubenswrapper[4687]: I0314 09:21:48.953076 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerStarted","Data":"aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37"} Mar 14 09:21:49 crc kubenswrapper[4687]: I0314 09:21:49.760593 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec264aa6-95b2-455c-9dda-1ac79e91c3bd" path="/var/lib/kubelet/pods/ec264aa6-95b2-455c-9dda-1ac79e91c3bd/volumes" Mar 14 09:21:49 crc kubenswrapper[4687]: I0314 09:21:49.966805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerStarted","Data":"71cefd35d3e4d7ec211457cc04a7b6881c985be6198d4f1049c62889876a4bc1"} Mar 14 09:21:49 crc kubenswrapper[4687]: I0314 09:21:49.966854 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerStarted","Data":"3f53cda01fc6889f65d9dbe00f63211869062cf7ffe9f71a243691ff174a3f6a"} Mar 14 09:21:50 crc kubenswrapper[4687]: I0314 09:21:50.003586 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.003564834 podStartE2EDuration="3.003564834s" podCreationTimestamp="2026-03-14 09:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:49.998248863 +0000 UTC m=+1494.986489268" watchObservedRunningTime="2026-03-14 09:21:50.003564834 +0000 UTC m=+1494.991805239" Mar 14 09:21:50 crc kubenswrapper[4687]: I0314 09:21:50.757131 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 09:21:50 crc kubenswrapper[4687]: I0314 09:21:50.978485 4687 generic.go:334] "Generic (PLEG): container finished" podID="9465bf07-7fc2-49a3-bc32-d6958a605b98" containerID="1dd55d4bbcaf7b020c3d6ee18240e52dd4d9d90a74197a969a60516e5ad3f343" exitCode=0 Mar 14 09:21:50 crc kubenswrapper[4687]: I0314 09:21:50.978569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wnffc" event={"ID":"9465bf07-7fc2-49a3-bc32-d6958a605b98","Type":"ContainerDied","Data":"1dd55d4bbcaf7b020c3d6ee18240e52dd4d9d90a74197a969a60516e5ad3f343"} Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.195534 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.195597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.422458 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.441981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.442091 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.470024 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.514507 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.580520 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.580737 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="dnsmasq-dns" containerID="cri-o://35496752d966cec18e4145df003d278058bc4e65df982592129dae2e8c62569b" gracePeriod=10 Mar 14 09:21:51 crc kubenswrapper[4687]: I0314 09:21:51.737378 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.043727 4687 generic.go:334] "Generic (PLEG): container finished" podID="683995cb-d47d-415b-b471-e7c225b8f997" containerID="35496752d966cec18e4145df003d278058bc4e65df982592129dae2e8c62569b" exitCode=0 Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.045152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" event={"ID":"683995cb-d47d-415b-b471-e7c225b8f997","Type":"ContainerDied","Data":"35496752d966cec18e4145df003d278058bc4e65df982592129dae2e8c62569b"} Mar 
14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.100293 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.127935 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.129005 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.215835 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.283745 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.284019 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318530 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6ls4\" (UniqueName: 
\"kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318642 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318773 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.318842 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb\") pod \"683995cb-d47d-415b-b471-e7c225b8f997\" (UID: \"683995cb-d47d-415b-b471-e7c225b8f997\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.332646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4" (OuterVolumeSpecName: "kube-api-access-g6ls4") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "kube-api-access-g6ls4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.376249 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config" (OuterVolumeSpecName: "config") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.405905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.414877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424258 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424583 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6ls4\" (UniqueName: \"kubernetes.io/projected/683995cb-d47d-415b-b471-e7c225b8f997-kube-api-access-g6ls4\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424612 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424623 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424634 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.424641 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.438075 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "683995cb-d47d-415b-b471-e7c225b8f997" (UID: "683995cb-d47d-415b-b471-e7c225b8f997"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.448857 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.526050 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89fd2\" (UniqueName: \"kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2\") pod \"9465bf07-7fc2-49a3-bc32-d6958a605b98\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.526265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data\") pod \"9465bf07-7fc2-49a3-bc32-d6958a605b98\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.526356 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle\") pod \"9465bf07-7fc2-49a3-bc32-d6958a605b98\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.526408 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts\") pod \"9465bf07-7fc2-49a3-bc32-d6958a605b98\" (UID: \"9465bf07-7fc2-49a3-bc32-d6958a605b98\") " Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.526783 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683995cb-d47d-415b-b471-e7c225b8f997-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.530569 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts" (OuterVolumeSpecName: "scripts") pod 
"9465bf07-7fc2-49a3-bc32-d6958a605b98" (UID: "9465bf07-7fc2-49a3-bc32-d6958a605b98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.531537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2" (OuterVolumeSpecName: "kube-api-access-89fd2") pod "9465bf07-7fc2-49a3-bc32-d6958a605b98" (UID: "9465bf07-7fc2-49a3-bc32-d6958a605b98"). InnerVolumeSpecName "kube-api-access-89fd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.551493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9465bf07-7fc2-49a3-bc32-d6958a605b98" (UID: "9465bf07-7fc2-49a3-bc32-d6958a605b98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.566444 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data" (OuterVolumeSpecName: "config-data") pod "9465bf07-7fc2-49a3-bc32-d6958a605b98" (UID: "9465bf07-7fc2-49a3-bc32-d6958a605b98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.628197 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.628236 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89fd2\" (UniqueName: \"kubernetes.io/projected/9465bf07-7fc2-49a3-bc32-d6958a605b98-kube-api-access-89fd2\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.628252 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:52 crc kubenswrapper[4687]: I0314 09:21:52.628264 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9465bf07-7fc2-49a3-bc32-d6958a605b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.053996 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wnffc" event={"ID":"9465bf07-7fc2-49a3-bc32-d6958a605b98","Type":"ContainerDied","Data":"c9af4f1841e8efbef23b3ac12b582233fb55feae712168a23e5673f13bda1a5a"} Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.054309 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9af4f1841e8efbef23b3ac12b582233fb55feae712168a23e5673f13bda1a5a" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.054050 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wnffc" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.055687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" event={"ID":"683995cb-d47d-415b-b471-e7c225b8f997","Type":"ContainerDied","Data":"70cbd898a739715b8850b33d3d8dfe6670bdd15f4ebd937f9cdf48e22ee5e7d8"} Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.055732 4687 scope.go:117] "RemoveContainer" containerID="35496752d966cec18e4145df003d278058bc4e65df982592129dae2e8c62569b" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.055755 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94bb9f455-tr46j" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.066408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc"} Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.126237 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.136584 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-94bb9f455-tr46j"] Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.166962 4687 scope.go:117] "RemoveContainer" containerID="4edea96f03c9881f2cafa9956880933b71b94a19d5e5c5d3b4a02455bd4a3f72" Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.200676 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.200934 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-log" 
containerID="cri-o://dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38" gracePeriod=30 Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.201119 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-api" containerID="cri-o://9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5" gracePeriod=30 Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.261449 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.287792 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.288015 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-log" containerID="cri-o://3f53cda01fc6889f65d9dbe00f63211869062cf7ffe9f71a243691ff174a3f6a" gracePeriod=30 Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.288082 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-metadata" containerID="cri-o://71cefd35d3e4d7ec211457cc04a7b6881c985be6198d4f1049c62889876a4bc1" gracePeriod=30 Mar 14 09:21:53 crc kubenswrapper[4687]: I0314 09:21:53.749103 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683995cb-d47d-415b-b471-e7c225b8f997" path="/var/lib/kubelet/pods/683995cb-d47d-415b-b471-e7c225b8f997/volumes" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.078176 4687 generic.go:334] "Generic (PLEG): container finished" podID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerID="71cefd35d3e4d7ec211457cc04a7b6881c985be6198d4f1049c62889876a4bc1" exitCode=0 Mar 14 09:21:54 crc 
kubenswrapper[4687]: I0314 09:21:54.078485 4687 generic.go:334] "Generic (PLEG): container finished" podID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerID="3f53cda01fc6889f65d9dbe00f63211869062cf7ffe9f71a243691ff174a3f6a" exitCode=143 Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.078271 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerDied","Data":"71cefd35d3e4d7ec211457cc04a7b6881c985be6198d4f1049c62889876a4bc1"} Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.078556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerDied","Data":"3f53cda01fc6889f65d9dbe00f63211869062cf7ffe9f71a243691ff174a3f6a"} Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.078570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"147bfd77-4211-4c6c-a3cd-b3f71f477898","Type":"ContainerDied","Data":"aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37"} Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.078582 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac99b2cb7e4c2d0bf310a3b1b2b7462318ca0c4499358ce6806d67a5627ae37" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.082182 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerID="dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38" exitCode=143 Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.082259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerDied","Data":"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38"} Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.127732 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.267892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data\") pod \"147bfd77-4211-4c6c-a3cd-b3f71f477898\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.267953 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs\") pod \"147bfd77-4211-4c6c-a3cd-b3f71f477898\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.267986 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle\") pod \"147bfd77-4211-4c6c-a3cd-b3f71f477898\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.268090 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzkzc\" (UniqueName: \"kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc\") pod \"147bfd77-4211-4c6c-a3cd-b3f71f477898\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.268172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs\") pod \"147bfd77-4211-4c6c-a3cd-b3f71f477898\" (UID: \"147bfd77-4211-4c6c-a3cd-b3f71f477898\") " Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.268192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs" (OuterVolumeSpecName: "logs") pod "147bfd77-4211-4c6c-a3cd-b3f71f477898" (UID: "147bfd77-4211-4c6c-a3cd-b3f71f477898"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.269449 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/147bfd77-4211-4c6c-a3cd-b3f71f477898-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.279171 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc" (OuterVolumeSpecName: "kube-api-access-pzkzc") pod "147bfd77-4211-4c6c-a3cd-b3f71f477898" (UID: "147bfd77-4211-4c6c-a3cd-b3f71f477898"). InnerVolumeSpecName "kube-api-access-pzkzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.310783 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data" (OuterVolumeSpecName: "config-data") pod "147bfd77-4211-4c6c-a3cd-b3f71f477898" (UID: "147bfd77-4211-4c6c-a3cd-b3f71f477898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.311776 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147bfd77-4211-4c6c-a3cd-b3f71f477898" (UID: "147bfd77-4211-4c6c-a3cd-b3f71f477898"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.330980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "147bfd77-4211-4c6c-a3cd-b3f71f477898" (UID: "147bfd77-4211-4c6c-a3cd-b3f71f477898"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.371176 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzkzc\" (UniqueName: \"kubernetes.io/projected/147bfd77-4211-4c6c-a3cd-b3f71f477898-kube-api-access-pzkzc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.371213 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.371226 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:54 crc kubenswrapper[4687]: I0314 09:21:54.371236 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147bfd77-4211-4c6c-a3cd-b3f71f477898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.095069 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.095173 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerName="nova-scheduler-scheduler" containerID="cri-o://1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" gracePeriod=30 Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.133478 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.141935 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.177574 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:55 crc kubenswrapper[4687]: E0314 09:21:55.178587 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="dnsmasq-dns" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.178681 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="dnsmasq-dns" Mar 14 09:21:55 crc kubenswrapper[4687]: E0314 09:21:55.178757 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="init" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.178817 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="init" Mar 14 09:21:55 crc kubenswrapper[4687]: E0314 09:21:55.178892 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9465bf07-7fc2-49a3-bc32-d6958a605b98" containerName="nova-manage" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.178953 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9465bf07-7fc2-49a3-bc32-d6958a605b98" 
containerName="nova-manage" Mar 14 09:21:55 crc kubenswrapper[4687]: E0314 09:21:55.179216 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-log" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.179307 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-log" Mar 14 09:21:55 crc kubenswrapper[4687]: E0314 09:21:55.179455 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-metadata" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.179538 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-metadata" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.180033 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-log" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.180138 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" containerName="nova-metadata-metadata" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.180209 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="683995cb-d47d-415b-b471-e7c225b8f997" containerName="dnsmasq-dns" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.180288 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9465bf07-7fc2-49a3-bc32-d6958a605b98" containerName="nova-manage" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.186012 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.193077 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.200020 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.211795 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.289794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.289850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.289964 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pg6z\" (UniqueName: \"kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.290003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs\") pod 
\"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.290076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.392304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pg6z\" (UniqueName: \"kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.392644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.392710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.392797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.392823 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.393377 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.398063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.398860 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.411889 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pg6z\" (UniqueName: \"kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z\") pod \"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.412689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data\") pod 
\"nova-metadata-0\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.510207 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:21:55 crc kubenswrapper[4687]: I0314 09:21:55.754005 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147bfd77-4211-4c6c-a3cd-b3f71f477898" path="/var/lib/kubelet/pods/147bfd77-4211-4c6c-a3cd-b3f71f477898/volumes" Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.032576 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.032824 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="83b6dcda-2598-425a-9ec3-3ca523f94052" containerName="kube-state-metrics" containerID="cri-o://d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e" gracePeriod=30 Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.067735 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.142285 4687 generic.go:334] "Generic (PLEG): container finished" podID="d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" containerID="29fd10a2f6f3e52fbba20c97885ce9e71a966c3cb04b637a92b060ab186daf33" exitCode=0 Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.142629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" event={"ID":"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f","Type":"ContainerDied","Data":"29fd10a2f6f3e52fbba20c97885ce9e71a966c3cb04b637a92b060ab186daf33"} Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.172443 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" 
containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" exitCode=1 Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.172504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a"} Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.172537 4687 scope.go:117] "RemoveContainer" containerID="f5578e7cfa4ec92b8e9083d9eeea581211ee101f57458be6da21caaf758fb2ef" Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.173364 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:21:56 crc kubenswrapper[4687]: E0314 09:21:56.173624 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:21:56 crc kubenswrapper[4687]: E0314 09:21:56.452501 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:21:56 crc kubenswrapper[4687]: E0314 09:21:56.465322 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:21:56 crc kubenswrapper[4687]: E0314 
09:21:56.468045 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:21:56 crc kubenswrapper[4687]: E0314 09:21:56.468100 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerName="nova-scheduler-scheduler" Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.791525 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.930950 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qt8p\" (UniqueName: \"kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p\") pod \"83b6dcda-2598-425a-9ec3-3ca523f94052\" (UID: \"83b6dcda-2598-425a-9ec3-3ca523f94052\") " Mar 14 09:21:56 crc kubenswrapper[4687]: I0314 09:21:56.938658 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p" (OuterVolumeSpecName: "kube-api-access-8qt8p") pod "83b6dcda-2598-425a-9ec3-3ca523f94052" (UID: "83b6dcda-2598-425a-9ec3-3ca523f94052"). InnerVolumeSpecName "kube-api-access-8qt8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.033876 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qt8p\" (UniqueName: \"kubernetes.io/projected/83b6dcda-2598-425a-9ec3-3ca523f94052-kube-api-access-8qt8p\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.186846 4687 generic.go:334] "Generic (PLEG): container finished" podID="83b6dcda-2598-425a-9ec3-3ca523f94052" containerID="d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e" exitCode=2 Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.186910 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83b6dcda-2598-425a-9ec3-3ca523f94052","Type":"ContainerDied","Data":"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e"} Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.186938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83b6dcda-2598-425a-9ec3-3ca523f94052","Type":"ContainerDied","Data":"4fb0a6aacc46742454d2fad563d89aebd065747a4ec44ac026725e7025e3011e"} Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.186967 4687 scope.go:117] "RemoveContainer" containerID="d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.187055 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.194493 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerStarted","Data":"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81"} Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.194534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerStarted","Data":"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916"} Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.194545 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerStarted","Data":"6d9749ba3d3e2d7e9073f26a48d867a7fe51f7695f9cb3d6c293347ab4bc9364"} Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.211719 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.211702637 podStartE2EDuration="2.211702637s" podCreationTimestamp="2026-03-14 09:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:57.211276457 +0000 UTC m=+1502.199516832" watchObservedRunningTime="2026-03-14 09:21:57.211702637 +0000 UTC m=+1502.199943012" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.223007 4687 scope.go:117] "RemoveContainer" containerID="d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e" Mar 14 09:21:57 crc kubenswrapper[4687]: E0314 09:21:57.223647 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e\": container with ID starting with 
d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e not found: ID does not exist" containerID="d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.223677 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e"} err="failed to get container status \"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e\": rpc error: code = NotFound desc = could not find container \"d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e\": container with ID starting with d41989789d425dd1c68b35afd29f140cd76a69f4b7b7c5d8d5a04a57d0c79f8e not found: ID does not exist" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.249030 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.260396 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.289633 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:57 crc kubenswrapper[4687]: E0314 09:21:57.290199 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b6dcda-2598-425a-9ec3-3ca523f94052" containerName="kube-state-metrics" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.290219 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b6dcda-2598-425a-9ec3-3ca523f94052" containerName="kube-state-metrics" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.290474 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b6dcda-2598-425a-9ec3-3ca523f94052" containerName="kube-state-metrics" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.291188 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.295769 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.295927 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.299817 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.471134 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.471310 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.471446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97k46\" (UniqueName: \"kubernetes.io/projected/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-api-access-97k46\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.471651 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.574304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.574373 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.574405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97k46\" (UniqueName: \"kubernetes.io/projected/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-api-access-97k46\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.574479 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.581380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.585662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.597655 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d5720-4d32-4320-abf4-18c7a4d70e33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.609615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97k46\" (UniqueName: \"kubernetes.io/projected/4a2d5720-4d32-4320-abf4-18c7a4d70e33-kube-api-access-97k46\") pod \"kube-state-metrics-0\" (UID: \"4a2d5720-4d32-4320-abf4-18c7a4d70e33\") " pod="openstack/kube-state-metrics-0" Mar 14 09:21:57 crc kubenswrapper[4687]: I0314 09:21:57.630024 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.755972 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b6dcda-2598-425a-9ec3-3ca523f94052" path="/var/lib/kubelet/pods/83b6dcda-2598-425a-9ec3-3ca523f94052/volumes" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.765865 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.774083 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785370 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle\") pod \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785413 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data\") pod \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtx7\" (UniqueName: \"kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7\") pod \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785507 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kks4r\" (UniqueName: \"kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r\") pod \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data\") pod \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\" (UID: 
\"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts\") pod \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs\") pod \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\" (UID: \"fc9fd2c4-446c-4b88-9c93-490143b42ee6\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.785737 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle\") pod \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\" (UID: \"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f\") " Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.792174 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r" (OuterVolumeSpecName: "kube-api-access-kks4r") pod "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" (UID: "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f"). InnerVolumeSpecName "kube-api-access-kks4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.792503 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs" (OuterVolumeSpecName: "logs") pod "fc9fd2c4-446c-4b88-9c93-490143b42ee6" (UID: "fc9fd2c4-446c-4b88-9c93-490143b42ee6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.792786 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts" (OuterVolumeSpecName: "scripts") pod "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" (UID: "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.812966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7" (OuterVolumeSpecName: "kube-api-access-6dtx7") pod "fc9fd2c4-446c-4b88-9c93-490143b42ee6" (UID: "fc9fd2c4-446c-4b88-9c93-490143b42ee6"). InnerVolumeSpecName "kube-api-access-6dtx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.829375 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data" (OuterVolumeSpecName: "config-data") pod "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" (UID: "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.838140 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc9fd2c4-446c-4b88-9c93-490143b42ee6" (UID: "fc9fd2c4-446c-4b88-9c93-490143b42ee6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.840109 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" (UID: "d05cec2e-2f40-4dec-a3cd-7f3d7f54952f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.842300 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data" (OuterVolumeSpecName: "config-data") pod "fc9fd2c4-446c-4b88-9c93-490143b42ee6" (UID: "fc9fd2c4-446c-4b88-9c93-490143b42ee6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889173 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889204 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889243 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9fd2c4-446c-4b88-9c93-490143b42ee6-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889264 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: 
I0314 09:21:57.889281 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9fd2c4-446c-4b88-9c93-490143b42ee6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889351 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889363 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dtx7\" (UniqueName: \"kubernetes.io/projected/fc9fd2c4-446c-4b88-9c93-490143b42ee6-kube-api-access-6dtx7\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:57.889376 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kks4r\" (UniqueName: \"kubernetes.io/projected/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f-kube-api-access-kks4r\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.216195 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.216535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5b8w6" event={"ID":"d05cec2e-2f40-4dec-a3cd-7f3d7f54952f","Type":"ContainerDied","Data":"51d8465aa3248e6d2185a1c92d2d1e30f019595232548c5859e72da8ec5aaeec"} Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.216590 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d8465aa3248e6d2185a1c92d2d1e30f019595232548c5859e72da8ec5aaeec" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.236101 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: E0314 09:21:58.236547 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-log" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.236561 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-log" Mar 14 09:21:58 crc kubenswrapper[4687]: E0314 09:21:58.236573 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" containerName="nova-cell1-conductor-db-sync" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.236580 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" containerName="nova-cell1-conductor-db-sync" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.236862 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerID="9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5" exitCode=0 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.236960 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: E0314 09:21:58.238825 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-api" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.238843 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-api" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.239118 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-log" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.239134 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" containerName="nova-api-api" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.239150 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" containerName="nova-cell1-conductor-db-sync" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.240022 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerDied","Data":"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5"} Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.240049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc9fd2c4-446c-4b88-9c93-490143b42ee6","Type":"ContainerDied","Data":"99bf9b2b219844d9435d38d38f43dbc38b2330e6e8ffa06b5667f8bdf7d5792d"} Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.240067 4687 scope.go:117] "RemoveContainer" containerID="9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.240245 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.251393 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.253234 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.257216 4687 generic.go:334] "Generic (PLEG): container finished" podID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerID="1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" exitCode=0 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.257314 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6080deda-4faa-40d7-9126-9c6ff985acb1","Type":"ContainerDied","Data":"1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2"} Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.288522 4687 scope.go:117] "RemoveContainer" containerID="dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.294867 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.295357 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.295501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.295569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljh6\" (UniqueName: \"kubernetes.io/projected/f7269d07-db77-4395-92c2-642b7237ae80-kube-api-access-8ljh6\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.309344 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.319491 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.320637 4687 scope.go:117] "RemoveContainer" containerID="9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5" Mar 14 09:21:58 crc kubenswrapper[4687]: E0314 09:21:58.321067 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5\": container with ID starting with 9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5 not found: ID does not exist" containerID="9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.321093 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5"} err="failed to get container status \"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5\": rpc error: code = NotFound desc = could not find container \"9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5\": container with ID starting with 9327734c2dc5d2ee4ed786b9390e95774026b438336ab9790858ea89a523bdc5 not found: 
ID does not exist" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.321112 4687 scope.go:117] "RemoveContainer" containerID="dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.321443 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: E0314 09:21:58.321484 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38\": container with ID starting with dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38 not found: ID does not exist" containerID="dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.321526 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38"} err="failed to get container status \"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38\": rpc error: code = NotFound desc = could not find container \"dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38\": container with ID starting with dd0f23fa7c35f720be29606d3db655cda588294a82b0724bc7ede70b30c11a38 not found: ID does not exist" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.325785 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.331871 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.377698 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.378015 4687 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-central-agent" containerID="cri-o://883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8" gracePeriod=30 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.378054 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="proxy-httpd" containerID="cri-o://ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865" gracePeriod=30 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.378096 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="sg-core" containerID="cri-o://ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1" gracePeriod=30 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.378104 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-notification-agent" containerID="cri-o://04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40" gracePeriod=30 Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.397809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.397937 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " 
pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.397972 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.398005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.398043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rch\" (UniqueName: \"kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.398086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.398138 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljh6\" (UniqueName: \"kubernetes.io/projected/f7269d07-db77-4395-92c2-642b7237ae80-kube-api-access-8ljh6\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.404002 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.406129 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7269d07-db77-4395-92c2-642b7237ae80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.416947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljh6\" (UniqueName: \"kubernetes.io/projected/f7269d07-db77-4395-92c2-642b7237ae80-kube-api-access-8ljh6\") pod \"nova-cell1-conductor-0\" (UID: \"f7269d07-db77-4395-92c2-642b7237ae80\") " pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.499849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.500006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.500053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data\") pod 
\"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.500093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rch\" (UniqueName: \"kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.500317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.503263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.503982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.516698 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rch\" (UniqueName: \"kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch\") pod \"nova-api-0\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.572839 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.643950 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:21:58 crc kubenswrapper[4687]: I0314 09:21:58.934402 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.012769 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnxth\" (UniqueName: \"kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth\") pod \"6080deda-4faa-40d7-9126-9c6ff985acb1\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.012871 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle\") pod \"6080deda-4faa-40d7-9126-9c6ff985acb1\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.013107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data\") pod \"6080deda-4faa-40d7-9126-9c6ff985acb1\" (UID: \"6080deda-4faa-40d7-9126-9c6ff985acb1\") " Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.026253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth" (OuterVolumeSpecName: "kube-api-access-dnxth") pod "6080deda-4faa-40d7-9126-9c6ff985acb1" (UID: "6080deda-4faa-40d7-9126-9c6ff985acb1"). InnerVolumeSpecName "kube-api-access-dnxth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.052970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6080deda-4faa-40d7-9126-9c6ff985acb1" (UID: "6080deda-4faa-40d7-9126-9c6ff985acb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.063425 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data" (OuterVolumeSpecName: "config-data") pod "6080deda-4faa-40d7-9126-9c6ff985acb1" (UID: "6080deda-4faa-40d7-9126-9c6ff985acb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.064564 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.120841 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.120881 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnxth\" (UniqueName: \"kubernetes.io/projected/6080deda-4faa-40d7-9126-9c6ff985acb1-kube-api-access-dnxth\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.120896 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6080deda-4faa-40d7-9126-9c6ff985acb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.199862 4687 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.320296 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7269d07-db77-4395-92c2-642b7237ae80","Type":"ContainerStarted","Data":"011ea34eac09f2e51360bfde82448a373843c6a2b78cfd7e5c35b2acc1790be6"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324818 4687 generic.go:334] "Generic (PLEG): container finished" podID="94d53105-0508-4b2f-bf01-9348ac28b813" containerID="ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865" exitCode=0 Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324852 4687 generic.go:334] "Generic (PLEG): container finished" podID="94d53105-0508-4b2f-bf01-9348ac28b813" containerID="ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1" exitCode=2 Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324861 4687 generic.go:334] "Generic (PLEG): container finished" podID="94d53105-0508-4b2f-bf01-9348ac28b813" containerID="883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8" exitCode=0 Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324915 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerDied","Data":"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerDied","Data":"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.324953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerDied","Data":"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.334694 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a2d5720-4d32-4320-abf4-18c7a4d70e33","Type":"ContainerStarted","Data":"9fa3d09f572a50df0f56ef57a3b3e56c9ef7bb443a0c1ab3acc1f8528d95edfb"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.338051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6080deda-4faa-40d7-9126-9c6ff985acb1","Type":"ContainerDied","Data":"60e6d968355af32448c4b9f523263b7f5a4d45c08ce1ccf68e7738c81eb53ce6"} Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.338125 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.338165 4687 scope.go:117] "RemoveContainer" containerID="1f202591c64b2117ca8b867e8b9c5de83e8fe1d392c797bacf1c376dec3159c2" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.340351 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.423727 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.439161 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.456233 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: E0314 09:21:59.456962 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerName="nova-scheduler-scheduler" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.457054 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerName="nova-scheduler-scheduler" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.457394 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" containerName="nova-scheduler-scheduler" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.458401 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.460523 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.474156 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.530026 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.530067 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn75g\" (UniqueName: \"kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.530111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " 
pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.632362 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.632803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.632827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn75g\" (UniqueName: \"kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.639458 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.640375 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.654220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zn75g\" (UniqueName: \"kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g\") pod \"nova-scheduler-0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " pod="openstack/nova-scheduler-0" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.748109 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6080deda-4faa-40d7-9126-9c6ff985acb1" path="/var/lib/kubelet/pods/6080deda-4faa-40d7-9126-9c6ff985acb1/volumes" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.749019 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9fd2c4-446c-4b88-9c93-490143b42ee6" path="/var/lib/kubelet/pods/fc9fd2c4-446c-4b88-9c93-490143b42ee6/volumes" Mar 14 09:21:59 crc kubenswrapper[4687]: I0314 09:21:59.864580 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.133772 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558002-8zs9t"] Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.135258 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.141849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswqp\" (UniqueName: \"kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp\") pod \"auto-csr-approver-29558002-8zs9t\" (UID: \"d9e440bf-b163-4125-bfe0-4d2d32cfb47d\") " pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.145018 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.145153 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.145224 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.165601 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-8zs9t"] Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.244477 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswqp\" (UniqueName: \"kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp\") pod \"auto-csr-approver-29558002-8zs9t\" (UID: \"d9e440bf-b163-4125-bfe0-4d2d32cfb47d\") " pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.261974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswqp\" (UniqueName: \"kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp\") pod \"auto-csr-approver-29558002-8zs9t\" (UID: \"d9e440bf-b163-4125-bfe0-4d2d32cfb47d\") " 
pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.334302 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.372674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0","Type":"ContainerStarted","Data":"997ca580257829665e596a999356620d15c4cac295e79a21ea4af12ea5a94900"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.384190 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7269d07-db77-4395-92c2-642b7237ae80","Type":"ContainerStarted","Data":"ffda9a03ec20f93414478836a068a8315572a9aa3c417373d34db703c3474db7"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.384265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.388682 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a2d5720-4d32-4320-abf4-18c7a4d70e33","Type":"ContainerStarted","Data":"21fb47a4854a01e6a47b76f56d8452272dd8818364f568190f5a59b3c32df295"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.389013 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.394222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerStarted","Data":"6de4d73a0ada8d7b9602efe894e2d6696c5f40720b33a67e78a74efa1955ac09"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.394285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerStarted","Data":"75d3c4b2b6ce2366dc7ff6f834aecf833259b6c339999ac05cceb9439823939a"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.394299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerStarted","Data":"703d0c322b7887efed94c21a1a7e438660901b2f1d3958756e0295f0c3fad165"} Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.422618 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.42259875 podStartE2EDuration="2.42259875s" podCreationTimestamp="2026-03-14 09:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:00.420845017 +0000 UTC m=+1505.409085392" watchObservedRunningTime="2026-03-14 09:22:00.42259875 +0000 UTC m=+1505.410839125" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.447099 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.977534705 podStartE2EDuration="3.447072905s" podCreationTimestamp="2026-03-14 09:21:57 +0000 UTC" firstStartedPulling="2026-03-14 09:21:59.069676687 +0000 UTC m=+1504.057917062" lastFinishedPulling="2026-03-14 09:21:59.539214887 +0000 UTC m=+1504.527455262" observedRunningTime="2026-03-14 09:22:00.435003646 +0000 UTC m=+1505.423244021" watchObservedRunningTime="2026-03-14 09:22:00.447072905 +0000 UTC m=+1505.435313290" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.464446 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.472236 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.472211536 podStartE2EDuration="2.472211536s" podCreationTimestamp="2026-03-14 09:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:00.456805936 +0000 UTC m=+1505.445046321" watchObservedRunningTime="2026-03-14 09:22:00.472211536 +0000 UTC m=+1505.460451911" Mar 14 09:22:00 crc kubenswrapper[4687]: I0314 09:22:00.948747 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-8zs9t"] Mar 14 09:22:01 crc kubenswrapper[4687]: I0314 09:22:01.405597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0","Type":"ContainerStarted","Data":"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626"} Mar 14 09:22:01 crc kubenswrapper[4687]: I0314 09:22:01.408643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" event={"ID":"d9e440bf-b163-4125-bfe0-4d2d32cfb47d","Type":"ContainerStarted","Data":"8a546031402e73f42d6b25a03d76dda48b805df706ac144acdb67b3ece5e4006"} Mar 14 09:22:01 crc kubenswrapper[4687]: I0314 09:22:01.438707 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.438682812 podStartE2EDuration="2.438682812s" podCreationTimestamp="2026-03-14 09:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:01.426522692 +0000 UTC m=+1506.414763077" watchObservedRunningTime="2026-03-14 09:22:01.438682812 +0000 UTC m=+1506.426923197" Mar 14 
09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.127606 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.128799 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:02 crc kubenswrapper[4687]: E0314 09:22:02.129169 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.129423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.220466 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.220535 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.221933 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.169:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.169:8443: connect: connection refused" Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.420777 4687 generic.go:334] "Generic (PLEG): container finished" podID="d9e440bf-b163-4125-bfe0-4d2d32cfb47d" containerID="fcc334b7a37a2c6bf9fd3ac83e82dd13aec88e772f2f063c59c351eccb014648" exitCode=0 Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.421979 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" event={"ID":"d9e440bf-b163-4125-bfe0-4d2d32cfb47d","Type":"ContainerDied","Data":"fcc334b7a37a2c6bf9fd3ac83e82dd13aec88e772f2f063c59c351eccb014648"} Mar 14 09:22:02 crc kubenswrapper[4687]: I0314 09:22:02.422512 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:02 crc kubenswrapper[4687]: E0314 09:22:02.422733 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.444455 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" exitCode=1 Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.444497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc"} Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.444758 4687 scope.go:117] "RemoveContainer" containerID="221b73e73647e888ac639b1478cc596447f1030ead7da38998ecaff4ef9017f6" Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.445717 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:03 crc kubenswrapper[4687]: E0314 09:22:03.446028 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.846076 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.937387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswqp\" (UniqueName: \"kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp\") pod \"d9e440bf-b163-4125-bfe0-4d2d32cfb47d\" (UID: \"d9e440bf-b163-4125-bfe0-4d2d32cfb47d\") " Mar 14 09:22:03 crc kubenswrapper[4687]: I0314 09:22:03.943971 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp" (OuterVolumeSpecName: "kube-api-access-mswqp") pod "d9e440bf-b163-4125-bfe0-4d2d32cfb47d" (UID: "d9e440bf-b163-4125-bfe0-4d2d32cfb47d"). InnerVolumeSpecName "kube-api-access-mswqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.040731 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswqp\" (UniqueName: \"kubernetes.io/projected/d9e440bf-b163-4125-bfe0-4d2d32cfb47d-kube-api-access-mswqp\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.042910 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvzq\" (UniqueName: \"kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142155 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142423 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.142450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd\") pod \"94d53105-0508-4b2f-bf01-9348ac28b813\" (UID: \"94d53105-0508-4b2f-bf01-9348ac28b813\") " Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.143419 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.143708 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.146863 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts" (OuterVolumeSpecName: "scripts") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.148201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq" (OuterVolumeSpecName: "kube-api-access-ddvzq") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "kube-api-access-ddvzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.171744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.231722 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245390 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245425 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245434 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d53105-0508-4b2f-bf01-9348ac28b813-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245445 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvzq\" (UniqueName: \"kubernetes.io/projected/94d53105-0508-4b2f-bf01-9348ac28b813-kube-api-access-ddvzq\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245453 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.245461 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.263790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data" (OuterVolumeSpecName: "config-data") pod "94d53105-0508-4b2f-bf01-9348ac28b813" (UID: "94d53105-0508-4b2f-bf01-9348ac28b813"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.347471 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d53105-0508-4b2f-bf01-9348ac28b813-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.457402 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.457401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-8zs9t" event={"ID":"d9e440bf-b163-4125-bfe0-4d2d32cfb47d","Type":"ContainerDied","Data":"8a546031402e73f42d6b25a03d76dda48b805df706ac144acdb67b3ece5e4006"} Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.457792 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a546031402e73f42d6b25a03d76dda48b805df706ac144acdb67b3ece5e4006" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.461004 4687 generic.go:334] "Generic (PLEG): container finished" podID="94d53105-0508-4b2f-bf01-9348ac28b813" containerID="04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40" exitCode=0 Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.461035 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerDied","Data":"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40"} Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.461053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d53105-0508-4b2f-bf01-9348ac28b813","Type":"ContainerDied","Data":"6a392bac3846deaf7b79519e32005f8b0d35a5738ffd01c6498cb5361377534b"} Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.461071 
4687 scope.go:117] "RemoveContainer" containerID="ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.461156 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.532286 4687 scope.go:117] "RemoveContainer" containerID="ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.557636 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.565878 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.588390 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.588895 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-notification-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.588915 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-notification-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.588932 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e440bf-b163-4125-bfe0-4d2d32cfb47d" containerName="oc" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.588940 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e440bf-b163-4125-bfe0-4d2d32cfb47d" containerName="oc" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.588958 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-central-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.588966 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-central-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.588985 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="sg-core" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.588992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="sg-core" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.589024 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="proxy-httpd" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589032 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="proxy-httpd" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589261 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="proxy-httpd" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589278 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e440bf-b163-4125-bfe0-4d2d32cfb47d" containerName="oc" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589299 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-central-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589319 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="ceilometer-notification-agent" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.589362 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" containerName="sg-core" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.591529 4687 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.593152 4687 scope.go:117] "RemoveContainer" containerID="04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.594226 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.595309 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.595550 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.600585 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.680691 4687 scope.go:117] "RemoveContainer" containerID="883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.704366 4687 scope.go:117] "RemoveContainer" containerID="ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.704868 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865\": container with ID starting with ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865 not found: ID does not exist" containerID="ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.704901 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865"} err="failed to get container 
status \"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865\": rpc error: code = NotFound desc = could not find container \"ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865\": container with ID starting with ee4f2258bb1a7cc973b65ef1aa5507a09ca1674c0a3ed100b799535ed7b08865 not found: ID does not exist" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.704922 4687 scope.go:117] "RemoveContainer" containerID="ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.705310 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1\": container with ID starting with ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1 not found: ID does not exist" containerID="ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.705356 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1"} err="failed to get container status \"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1\": rpc error: code = NotFound desc = could not find container \"ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1\": container with ID starting with ad02c4701acaf3f038e871e0b947882d0085fd822597d43062e93e4f9c69d2b1 not found: ID does not exist" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.705376 4687 scope.go:117] "RemoveContainer" containerID="04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.705690 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40\": container with ID starting with 04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40 not found: ID does not exist" containerID="04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.705710 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40"} err="failed to get container status \"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40\": rpc error: code = NotFound desc = could not find container \"04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40\": container with ID starting with 04c551d4a0498dddc3f48db9b52d2c9b4dcf551e5d687e1fb005a4df0807af40 not found: ID does not exist" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.705725 4687 scope.go:117] "RemoveContainer" containerID="883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8" Mar 14 09:22:04 crc kubenswrapper[4687]: E0314 09:22:04.706053 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8\": container with ID starting with 883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8 not found: ID does not exist" containerID="883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.706100 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8"} err="failed to get container status \"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8\": rpc error: code = NotFound desc = could not find container \"883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8\": container with ID 
starting with 883cc6b75d009ac8ff790660ddde81e5d23666945b2f84ac7aa26d220ab002b8 not found: ID does not exist" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757608 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757736 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc 
kubenswrapper[4687]: I0314 09:22:04.757753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.757847 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqjlk\" (UniqueName: \"kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859568 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859681 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859801 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.859931 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqjlk\" (UniqueName: \"kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc 
kubenswrapper[4687]: I0314 09:22:04.860579 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.860817 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.864448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.864690 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.864907 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.876422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.876926 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.885506 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.887301 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqjlk\" (UniqueName: \"kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk\") pod \"ceilometer-0\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.956033 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-x72zh"] Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.972491 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:04 crc kubenswrapper[4687]: I0314 09:22:04.979395 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-x72zh"] Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.438679 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.473764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerStarted","Data":"a6d876c4433dee873c7af2067f82d0b8b990d1e40f9330866f2d9c5be2b0f652"} Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.511329 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.511389 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.756520 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63" path="/var/lib/kubelet/pods/435ba5a2-2b47-42e7-aa8f-0a4ce9cc0d63/volumes" Mar 14 09:22:05 crc kubenswrapper[4687]: I0314 09:22:05.757423 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d53105-0508-4b2f-bf01-9348ac28b813" path="/var/lib/kubelet/pods/94d53105-0508-4b2f-bf01-9348ac28b813/volumes" Mar 14 09:22:06 crc kubenswrapper[4687]: I0314 09:22:06.486972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerStarted","Data":"6a1330f5472a710e20daee2633ccf69d0812472cee686c19b0dbaba516bf7417"} Mar 14 09:22:06 crc kubenswrapper[4687]: I0314 09:22:06.487270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerStarted","Data":"38949197e4bafe548ae2f1cc0c5074e6de3719d6195932b49467bb317b13eaf1"} Mar 14 09:22:06 crc kubenswrapper[4687]: I0314 09:22:06.540557 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:06 crc kubenswrapper[4687]: I0314 09:22:06.540556 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:07 crc kubenswrapper[4687]: I0314 09:22:07.499685 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerStarted","Data":"44bd6f12b5048e846aa1b6eb6c7829020ce79d600fddf7755153b44e39dfd5e7"} Mar 14 09:22:07 crc kubenswrapper[4687]: I0314 09:22:07.639824 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.511899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerStarted","Data":"5df6eb4d659a68b159e3ac0ae541e0040e08eef1a6045322b21f05a1936eb295"} Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.512422 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.546812 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=1.875792269 podStartE2EDuration="4.546793556s" podCreationTimestamp="2026-03-14 09:22:04 +0000 UTC" firstStartedPulling="2026-03-14 09:22:05.44149438 +0000 UTC m=+1510.429734755" lastFinishedPulling="2026-03-14 09:22:08.112495657 +0000 UTC m=+1513.100736042" observedRunningTime="2026-03-14 09:22:08.540573052 +0000 UTC m=+1513.528813427" watchObservedRunningTime="2026-03-14 09:22:08.546793556 +0000 UTC m=+1513.535033931" Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.609664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.644311 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:08 crc kubenswrapper[4687]: I0314 09:22:08.644383 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:09 crc kubenswrapper[4687]: I0314 09:22:09.726573 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:09 crc kubenswrapper[4687]: I0314 09:22:09.726573 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:09 crc kubenswrapper[4687]: I0314 09:22:09.865137 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 09:22:09 crc kubenswrapper[4687]: I0314 09:22:09.901415 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Mar 14 09:22:10 crc kubenswrapper[4687]: I0314 09:22:10.573464 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 09:22:12 crc kubenswrapper[4687]: I0314 09:22:12.220231 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:22:12 crc kubenswrapper[4687]: I0314 09:22:12.220365 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:22:12 crc kubenswrapper[4687]: I0314 09:22:12.221175 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:12 crc kubenswrapper[4687]: E0314 09:22:12.221463 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:13 crc kubenswrapper[4687]: I0314 09:22:13.510615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:13 crc kubenswrapper[4687]: I0314 09:22:13.510916 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:15 crc kubenswrapper[4687]: I0314 09:22:15.517454 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:22:15 crc kubenswrapper[4687]: I0314 09:22:15.522619 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:22:15 crc kubenswrapper[4687]: I0314 09:22:15.525892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Mar 14 09:22:15 crc kubenswrapper[4687]: I0314 09:22:15.590383 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:22:15 crc kubenswrapper[4687]: I0314 09:22:15.757979 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:15 crc kubenswrapper[4687]: E0314 09:22:15.758192 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:16 crc kubenswrapper[4687]: I0314 09:22:16.644885 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:22:16 crc kubenswrapper[4687]: I0314 09:22:16.645488 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:22:16 crc kubenswrapper[4687]: W0314 09:22:16.980408 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9e440bf_b163_4125_bfe0_4d2d32cfb47d.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9e440bf_b163_4125_bfe0_4d2d32cfb47d.slice: no such file or directory Mar 14 09:22:17 crc kubenswrapper[4687]: E0314 09:22:17.221032 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcada8742_58e1_4470_9ccb_28d8a2e09a2e.slice/crio-conmon-9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcada8742_58e1_4470_9ccb_28d8a2e09a2e.slice/crio-9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.409189 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.555838 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjw2\" (UniqueName: \"kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2\") pod \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.555969 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data\") pod \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.556013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle\") pod \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\" (UID: \"cada8742-58e1-4470-9ccb-28d8a2e09a2e\") " Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.564124 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2" (OuterVolumeSpecName: "kube-api-access-kbjw2") pod "cada8742-58e1-4470-9ccb-28d8a2e09a2e" (UID: "cada8742-58e1-4470-9ccb-28d8a2e09a2e"). InnerVolumeSpecName "kube-api-access-kbjw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.588449 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data" (OuterVolumeSpecName: "config-data") pod "cada8742-58e1-4470-9ccb-28d8a2e09a2e" (UID: "cada8742-58e1-4470-9ccb-28d8a2e09a2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.589390 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cada8742-58e1-4470-9ccb-28d8a2e09a2e" (UID: "cada8742-58e1-4470-9ccb-28d8a2e09a2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.612808 4687 generic.go:334] "Generic (PLEG): container finished" podID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" containerID="9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5" exitCode=137 Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.614492 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.614522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cada8742-58e1-4470-9ccb-28d8a2e09a2e","Type":"ContainerDied","Data":"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5"} Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.614627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cada8742-58e1-4470-9ccb-28d8a2e09a2e","Type":"ContainerDied","Data":"aebbe954b39071dc6045a5a6d7f7990799d01bd5142e0acdf21c52eda4ab9d13"} Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.614655 4687 scope.go:117] "RemoveContainer" containerID="9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.658020 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjw2\" (UniqueName: \"kubernetes.io/projected/cada8742-58e1-4470-9ccb-28d8a2e09a2e-kube-api-access-kbjw2\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.658053 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.658065 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada8742-58e1-4470-9ccb-28d8a2e09a2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.677544 4687 scope.go:117] "RemoveContainer" containerID="9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5" Mar 14 09:22:17 crc kubenswrapper[4687]: E0314 09:22:17.679194 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5\": container with ID starting with 9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5 not found: ID does not exist" containerID="9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.679231 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5"} err="failed to get container status \"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5\": rpc error: code = NotFound desc = could not find container \"9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5\": container with ID starting with 9fdd6da45bdcb34e62c7e3c3568e99a72fa84eb05f87b530bf94d8c4ee1889b5 not found: ID does not exist" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.688307 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.710697 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.719787 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:17 crc kubenswrapper[4687]: E0314 09:22:17.720320 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.720362 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.720600 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" 
containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.721450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.723813 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.724070 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.724237 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.731735 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.757873 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cada8742-58e1-4470-9ccb-28d8a2e09a2e" path="/var/lib/kubelet/pods/cada8742-58e1-4470-9ccb-28d8a2e09a2e/volumes" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.862057 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.862400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xm4\" (UniqueName: \"kubernetes.io/projected/3455a184-abb4-4642-888d-b5c9ba36b999-kube-api-access-z5xm4\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 
09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.862718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.862899 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.863089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.965104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.965218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: 
I0314 09:22:17.965282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.965362 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xm4\" (UniqueName: \"kubernetes.io/projected/3455a184-abb4-4642-888d-b5c9ba36b999-kube-api-access-z5xm4\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.965425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.970990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.971013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.971700 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.973893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3455a184-abb4-4642-888d-b5c9ba36b999-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:17 crc kubenswrapper[4687]: I0314 09:22:17.986525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xm4\" (UniqueName: \"kubernetes.io/projected/3455a184-abb4-4642-888d-b5c9ba36b999-kube-api-access-z5xm4\") pod \"nova-cell1-novncproxy-0\" (UID: \"3455a184-abb4-4642-888d-b5c9ba36b999\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.042026 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.494928 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.625054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3455a184-abb4-4642-888d-b5c9ba36b999","Type":"ContainerStarted","Data":"c164d552b4890a7418ba2809b56edd797e48fb65b903899f2832bf6d4ecf06ca"} Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.654215 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.654299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.660195 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.665903 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.899760 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85bc458d95-5dpkr"] Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.907785 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.911961 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bc458d95-5dpkr"] Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdqs\" (UniqueName: \"kubernetes.io/projected/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-kube-api-access-gjdqs\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-svc\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-swift-storage-0\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-nb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995273 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-config\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:18 crc kubenswrapper[4687]: I0314 09:22:18.995320 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-sb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097177 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-svc\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097223 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-swift-storage-0\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-nb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097310 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-config\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097376 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-sb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.097438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdqs\" (UniqueName: \"kubernetes.io/projected/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-kube-api-access-gjdqs\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.098672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-sb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.098762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-ovsdbserver-nb\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.098792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-svc\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.098880 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-config\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.099047 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-dns-swift-storage-0\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.115148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdqs\" (UniqueName: \"kubernetes.io/projected/7f4b5e55-374f-464c-b584-89cd9ea5a2d6-kube-api-access-gjdqs\") pod \"dnsmasq-dns-85bc458d95-5dpkr\" (UID: \"7f4b5e55-374f-464c-b584-89cd9ea5a2d6\") " pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.230552 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.673834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3455a184-abb4-4642-888d-b5c9ba36b999","Type":"ContainerStarted","Data":"9e9e81e96363d4b7f6e21512885dd705c77a8fd9a9ed0bb7a8fc152293362078"} Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.699637 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.69961664 podStartE2EDuration="2.69961664s" podCreationTimestamp="2026-03-14 09:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:19.689233524 +0000 UTC m=+1524.677473919" watchObservedRunningTime="2026-03-14 09:22:19.69961664 +0000 UTC m=+1524.687857015" Mar 14 09:22:19 crc kubenswrapper[4687]: I0314 09:22:19.782854 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bc458d95-5dpkr"] Mar 14 09:22:19 crc kubenswrapper[4687]: W0314 09:22:19.789108 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f4b5e55_374f_464c_b584_89cd9ea5a2d6.slice/crio-31f75cb7b9da34a7365d666231515f8dfd5e5158881846ab1ec34d3e9a99d638 WatchSource:0}: Error finding container 31f75cb7b9da34a7365d666231515f8dfd5e5158881846ab1ec34d3e9a99d638: Status 404 returned error can't find the container with id 31f75cb7b9da34a7365d666231515f8dfd5e5158881846ab1ec34d3e9a99d638 Mar 14 09:22:20 crc kubenswrapper[4687]: I0314 09:22:20.687486 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f4b5e55-374f-464c-b584-89cd9ea5a2d6" containerID="8cc7a3fe977eeec77b227b8883def034cd19ca7b2cdb6e9fe2bbfac5f022e7e7" exitCode=0 Mar 14 09:22:20 crc kubenswrapper[4687]: I0314 09:22:20.687611 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" event={"ID":"7f4b5e55-374f-464c-b584-89cd9ea5a2d6","Type":"ContainerDied","Data":"8cc7a3fe977eeec77b227b8883def034cd19ca7b2cdb6e9fe2bbfac5f022e7e7"} Mar 14 09:22:20 crc kubenswrapper[4687]: I0314 09:22:20.687911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" event={"ID":"7f4b5e55-374f-464c-b584-89cd9ea5a2d6","Type":"ContainerStarted","Data":"31f75cb7b9da34a7365d666231515f8dfd5e5158881846ab1ec34d3e9a99d638"} Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.505314 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.551629 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.552055 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-central-agent" containerID="cri-o://38949197e4bafe548ae2f1cc0c5074e6de3719d6195932b49467bb317b13eaf1" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.552584 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="proxy-httpd" containerID="cri-o://5df6eb4d659a68b159e3ac0ae541e0040e08eef1a6045322b21f05a1936eb295" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.552665 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="sg-core" containerID="cri-o://44bd6f12b5048e846aa1b6eb6c7829020ce79d600fddf7755153b44e39dfd5e7" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.552718 4687 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-notification-agent" containerID="cri-o://6a1330f5472a710e20daee2633ccf69d0812472cee686c19b0dbaba516bf7417" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.658998 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.230:3000/\": read tcp 10.217.0.2:57734->10.217.0.230:3000: read: connection reset by peer" Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.700695 4687 generic.go:334] "Generic (PLEG): container finished" podID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerID="44bd6f12b5048e846aa1b6eb6c7829020ce79d600fddf7755153b44e39dfd5e7" exitCode=2 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.700770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerDied","Data":"44bd6f12b5048e846aa1b6eb6c7829020ce79d600fddf7755153b44e39dfd5e7"} Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.702734 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" event={"ID":"7f4b5e55-374f-464c-b584-89cd9ea5a2d6","Type":"ContainerStarted","Data":"f7e769fa434a09f06dac930ee3816e67a51bfe64add6e190f14977edf6871ee0"} Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.702878 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-log" containerID="cri-o://75d3c4b2b6ce2366dc7ff6f834aecf833259b6c339999ac05cceb9439823939a" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.702948 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-api" containerID="cri-o://6de4d73a0ada8d7b9602efe894e2d6696c5f40720b33a67e78a74efa1955ac09" gracePeriod=30 Mar 14 09:22:21 crc kubenswrapper[4687]: I0314 09:22:21.745880 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" podStartSLOduration=3.745861392 podStartE2EDuration="3.745861392s" podCreationTimestamp="2026-03-14 09:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:21.728191485 +0000 UTC m=+1526.716431870" watchObservedRunningTime="2026-03-14 09:22:21.745861392 +0000 UTC m=+1526.734101757" Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.716509 4687 generic.go:334] "Generic (PLEG): container finished" podID="26bed80f-8787-4226-bd93-3071553cb803" containerID="6de4d73a0ada8d7b9602efe894e2d6696c5f40720b33a67e78a74efa1955ac09" exitCode=0 Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.717429 4687 generic.go:334] "Generic (PLEG): container finished" podID="26bed80f-8787-4226-bd93-3071553cb803" containerID="75d3c4b2b6ce2366dc7ff6f834aecf833259b6c339999ac05cceb9439823939a" exitCode=143 Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.717459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerDied","Data":"6de4d73a0ada8d7b9602efe894e2d6696c5f40720b33a67e78a74efa1955ac09"} Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.717656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerDied","Data":"75d3c4b2b6ce2366dc7ff6f834aecf833259b6c339999ac05cceb9439823939a"} Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.724630 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerID="5df6eb4d659a68b159e3ac0ae541e0040e08eef1a6045322b21f05a1936eb295" exitCode=0 Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.724674 4687 generic.go:334] "Generic (PLEG): container finished" podID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerID="38949197e4bafe548ae2f1cc0c5074e6de3719d6195932b49467bb317b13eaf1" exitCode=0 Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.726085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerDied","Data":"5df6eb4d659a68b159e3ac0ae541e0040e08eef1a6045322b21f05a1936eb295"} Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.726120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerDied","Data":"38949197e4bafe548ae2f1cc0c5074e6de3719d6195932b49467bb317b13eaf1"} Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.726160 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:22 crc kubenswrapper[4687]: I0314 09:22:22.739105 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:22 crc kubenswrapper[4687]: E0314 09:22:22.739366 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.042965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.055040 4687 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181025 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs\") pod \"26bed80f-8787-4226-bd93-3071553cb803\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181122 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data\") pod \"26bed80f-8787-4226-bd93-3071553cb803\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle\") pod \"26bed80f-8787-4226-bd93-3071553cb803\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5rch\" (UniqueName: \"kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch\") pod \"26bed80f-8787-4226-bd93-3071553cb803\" (UID: \"26bed80f-8787-4226-bd93-3071553cb803\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs" (OuterVolumeSpecName: "logs") pod "26bed80f-8787-4226-bd93-3071553cb803" (UID: "26bed80f-8787-4226-bd93-3071553cb803"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.181909 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26bed80f-8787-4226-bd93-3071553cb803-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.186201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch" (OuterVolumeSpecName: "kube-api-access-r5rch") pod "26bed80f-8787-4226-bd93-3071553cb803" (UID: "26bed80f-8787-4226-bd93-3071553cb803"). InnerVolumeSpecName "kube-api-access-r5rch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.210045 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26bed80f-8787-4226-bd93-3071553cb803" (UID: "26bed80f-8787-4226-bd93-3071553cb803"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.232853 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data" (OuterVolumeSpecName: "config-data") pod "26bed80f-8787-4226-bd93-3071553cb803" (UID: "26bed80f-8787-4226-bd93-3071553cb803"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.283971 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.284016 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bed80f-8787-4226-bd93-3071553cb803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.284031 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5rch\" (UniqueName: \"kubernetes.io/projected/26bed80f-8787-4226-bd93-3071553cb803-kube-api-access-r5rch\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.746968 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.749759 4687 generic.go:334] "Generic (PLEG): container finished" podID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerID="6a1330f5472a710e20daee2633ccf69d0812472cee686c19b0dbaba516bf7417" exitCode=0 Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.750777 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26bed80f-8787-4226-bd93-3071553cb803","Type":"ContainerDied","Data":"703d0c322b7887efed94c21a1a7e438660901b2f1d3958756e0295f0c3fad165"} Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.750828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerDied","Data":"6a1330f5472a710e20daee2633ccf69d0812472cee686c19b0dbaba516bf7417"} Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.750856 4687 scope.go:117] "RemoveContainer" 
containerID="6de4d73a0ada8d7b9602efe894e2d6696c5f40720b33a67e78a74efa1955ac09" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.766397 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.778633 4687 scope.go:117] "RemoveContainer" containerID="75d3c4b2b6ce2366dc7ff6f834aecf833259b6c339999ac05cceb9439823939a" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.833836 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.843569 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.861595 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862040 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-api" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862057 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-api" Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862087 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-notification-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862094 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-notification-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862104 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="proxy-httpd" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862109 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="proxy-httpd" Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862125 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-log" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862135 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-log" Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862148 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-central-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862155 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-central-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: E0314 09:22:23.862165 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="sg-core" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862170 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="sg-core" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862367 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-api" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862393 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="proxy-httpd" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862401 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-notification-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862415 4687 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="ceilometer-central-agent" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862424 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bed80f-8787-4226-bd93-3071553cb803" containerName="nova-api-log" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.862432 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" containerName="sg-core" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.863503 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.868145 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.869118 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.884377 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.886731 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933299 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 
09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933758 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933949 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.933976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqjlk\" (UniqueName: \"kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934026 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs\") pod \"c0195526-ced6-423e-8eb9-5e9a05643c1a\" (UID: \"c0195526-ced6-423e-8eb9-5e9a05643c1a\") " Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934936 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrsw\" (UniqueName: \"kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.934968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs\") pod \"nova-api-0\" 
(UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.935047 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.935090 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0195526-ced6-423e-8eb9-5e9a05643c1a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.940046 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts" (OuterVolumeSpecName: "scripts") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.940568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk" (OuterVolumeSpecName: "kube-api-access-kqjlk") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "kube-api-access-kqjlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.963574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:23 crc kubenswrapper[4687]: I0314 09:22:23.992878 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.015186 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.036813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbrsw\" (UniqueName: \"kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.036879 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.036943 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " 
pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.036971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037221 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037239 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037251 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqjlk\" (UniqueName: \"kubernetes.io/projected/c0195526-ced6-423e-8eb9-5e9a05643c1a-kube-api-access-kqjlk\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037263 4687 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.037273 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.039461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data" (OuterVolumeSpecName: "config-data") pod "c0195526-ced6-423e-8eb9-5e9a05643c1a" (UID: "c0195526-ced6-423e-8eb9-5e9a05643c1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.043699 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.044682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.047000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.047778 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.050836 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.058431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbrsw\" (UniqueName: \"kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw\") pod \"nova-api-0\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.138062 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0195526-ced6-423e-8eb9-5e9a05643c1a-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.184699 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.619938 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:24 crc kubenswrapper[4687]: W0314 09:22:24.623834 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9040f6b_436a_40fb_bd16_ffc95a4b60ac.slice/crio-bfe91eb96a0236550341bd21ff96a1841e82d5c893ed06423f2bdef0c7d89326 WatchSource:0}: Error finding container bfe91eb96a0236550341bd21ff96a1841e82d5c893ed06423f2bdef0c7d89326: Status 404 returned error can't find the container with id bfe91eb96a0236550341bd21ff96a1841e82d5c893ed06423f2bdef0c7d89326 Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.798715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerStarted","Data":"2dda7e8ea06378293e9701b3ac95ce45f713beda7797cbafe53d2c2cf0d01934"} Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.798809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerStarted","Data":"bfe91eb96a0236550341bd21ff96a1841e82d5c893ed06423f2bdef0c7d89326"} Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.802692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0195526-ced6-423e-8eb9-5e9a05643c1a","Type":"ContainerDied","Data":"a6d876c4433dee873c7af2067f82d0b8b990d1e40f9330866f2d9c5be2b0f652"} Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.802850 4687 scope.go:117] "RemoveContainer" containerID="5df6eb4d659a68b159e3ac0ae541e0040e08eef1a6045322b21f05a1936eb295" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.802746 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.828955 4687 scope.go:117] "RemoveContainer" containerID="44bd6f12b5048e846aa1b6eb6c7829020ce79d600fddf7755153b44e39dfd5e7" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.837391 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.855789 4687 scope.go:117] "RemoveContainer" containerID="6a1330f5472a710e20daee2633ccf69d0812472cee686c19b0dbaba516bf7417" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.857394 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.873389 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.876083 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.878704 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.878885 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.878999 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.881913 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:24 crc kubenswrapper[4687]: I0314 09:22:24.898594 4687 scope.go:117] "RemoveContainer" containerID="38949197e4bafe548ae2f1cc0c5074e6de3719d6195932b49467bb317b13eaf1" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.064750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-run-httpd\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dc4b\" (UniqueName: \"kubernetes.io/projected/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-kube-api-access-9dc4b\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-log-httpd\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-config-data\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.065309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-scripts\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-run-httpd\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dc4b\" (UniqueName: \"kubernetes.io/projected/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-kube-api-access-9dc4b\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 
09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-log-httpd\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-config-data\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.166745 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-scripts\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.167094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-run-httpd\") pod 
\"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.167197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-log-httpd\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.172211 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.172642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.175303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-config-data\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.175834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.178187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-scripts\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.183964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dc4b\" (UniqueName: \"kubernetes.io/projected/1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef-kube-api-access-9dc4b\") pod \"ceilometer-0\" (UID: \"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef\") " pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.193737 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.632213 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:22:25 crc kubenswrapper[4687]: W0314 09:22:25.642905 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e84c843_c7ab_4f8a_82fc_ffe1464cc1ef.slice/crio-b7f5068ce0d18d93cf2a024c0e4b02d1b7eb489af8922603b0998efb7a92a1a5 WatchSource:0}: Error finding container b7f5068ce0d18d93cf2a024c0e4b02d1b7eb489af8922603b0998efb7a92a1a5: Status 404 returned error can't find the container with id b7f5068ce0d18d93cf2a024c0e4b02d1b7eb489af8922603b0998efb7a92a1a5 Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.709872 4687 scope.go:117] "RemoveContainer" containerID="9ae11f6ee760c95a361da8dcd523641a543895d7a1ae68a851436f83e8a0a0e8" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.758807 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bed80f-8787-4226-bd93-3071553cb803" path="/var/lib/kubelet/pods/26bed80f-8787-4226-bd93-3071553cb803/volumes" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.760234 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0195526-ced6-423e-8eb9-5e9a05643c1a" 
path="/var/lib/kubelet/pods/c0195526-ced6-423e-8eb9-5e9a05643c1a/volumes" Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.821265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerStarted","Data":"4f63f0f52685c728229558541057fbcae1e620da8ae1745ae0d00f742b06c7f7"} Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.829001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef","Type":"ContainerStarted","Data":"b7f5068ce0d18d93cf2a024c0e4b02d1b7eb489af8922603b0998efb7a92a1a5"} Mar 14 09:22:25 crc kubenswrapper[4687]: I0314 09:22:25.846992 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.846974348 podStartE2EDuration="2.846974348s" podCreationTimestamp="2026-03-14 09:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:25.841776999 +0000 UTC m=+1530.830017374" watchObservedRunningTime="2026-03-14 09:22:25.846974348 +0000 UTC m=+1530.835214723" Mar 14 09:22:26 crc kubenswrapper[4687]: I0314 09:22:26.840829 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef","Type":"ContainerStarted","Data":"819faf50c08a62723d0389d197f5b8258050d1a021022601524b6fcbf45712ea"} Mar 14 09:22:26 crc kubenswrapper[4687]: I0314 09:22:26.841176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef","Type":"ContainerStarted","Data":"964afec65d508d928cbfb97af8f1879d1aba7bc043c44326a82624e1690ffff0"} Mar 14 09:22:27 crc kubenswrapper[4687]: I0314 09:22:27.852767 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef","Type":"ContainerStarted","Data":"f0e54b977ba341767b6e6771f015db22fc442f3a40b6117700b928c796ad77a6"} Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.043303 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.061467 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.867774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef","Type":"ContainerStarted","Data":"ed77395b116f6f682fa93a60810056eae07d6ec144ae56eb7ed906cba52e0485"} Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.868416 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.891408 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:28 crc kubenswrapper[4687]: I0314 09:22:28.899035 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9726705230000001 podStartE2EDuration="4.899013017s" podCreationTimestamp="2026-03-14 09:22:24 +0000 UTC" firstStartedPulling="2026-03-14 09:22:25.645781397 +0000 UTC m=+1530.634021772" lastFinishedPulling="2026-03-14 09:22:28.572123891 +0000 UTC m=+1533.560364266" observedRunningTime="2026-03-14 09:22:28.887012 +0000 UTC m=+1533.875252375" watchObservedRunningTime="2026-03-14 09:22:28.899013017 +0000 UTC m=+1533.887253382" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.039967 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5qg67"] Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.041471 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.046090 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.047712 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.054876 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qg67"] Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.062013 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hcjg\" (UniqueName: \"kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.062314 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.062661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.062707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.164425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.164519 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.164574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hcjg\" (UniqueName: \"kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.164610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.170165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.170694 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.171093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.182728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hcjg\" (UniqueName: \"kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg\") pod \"nova-cell1-cell-mapping-5qg67\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.231514 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85bc458d95-5dpkr" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.309253 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.309889 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="dnsmasq-dns" 
containerID="cri-o://99ff8611f2bc5d219b7af1496417cc53462f6d93954e98a39f6db0637fc9b7a5" gracePeriod=10 Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.363855 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.739841 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:29 crc kubenswrapper[4687]: E0314 09:22:29.740246 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.818849 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qg67"] Mar 14 09:22:29 crc kubenswrapper[4687]: W0314 09:22:29.822595 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c77e04c_9666_4578_bc3f_8d91d72ae5d0.slice/crio-a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a WatchSource:0}: Error finding container a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a: Status 404 returned error can't find the container with id a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.919171 4687 generic.go:334] "Generic (PLEG): container finished" podID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerID="99ff8611f2bc5d219b7af1496417cc53462f6d93954e98a39f6db0637fc9b7a5" exitCode=0 Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.919594 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" event={"ID":"6cc040b2-2a45-497a-844b-df1ec94af4d9","Type":"ContainerDied","Data":"99ff8611f2bc5d219b7af1496417cc53462f6d93954e98a39f6db0637fc9b7a5"} Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.920783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" event={"ID":"6cc040b2-2a45-497a-844b-df1ec94af4d9","Type":"ContainerDied","Data":"cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0"} Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.920844 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe43b883d89c783b79cb285ccda37b53656d63b34290a01d2ac611098a229f0" Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.939497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qg67" event={"ID":"3c77e04c-9666-4578-bc3f-8d91d72ae5d0","Type":"ContainerStarted","Data":"a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a"} Mar 14 09:22:29 crc kubenswrapper[4687]: I0314 09:22:29.968953 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090056 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr6xd\" (UniqueName: \"kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090358 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090500 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.090519 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config\") pod \"6cc040b2-2a45-497a-844b-df1ec94af4d9\" (UID: \"6cc040b2-2a45-497a-844b-df1ec94af4d9\") " Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.114239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd" (OuterVolumeSpecName: "kube-api-access-kr6xd") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "kube-api-access-kr6xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.192876 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr6xd\" (UniqueName: \"kubernetes.io/projected/6cc040b2-2a45-497a-844b-df1ec94af4d9-kube-api-access-kr6xd\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.236495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.237816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.245297 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.252586 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config" (OuterVolumeSpecName: "config") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.259899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6cc040b2-2a45-497a-844b-df1ec94af4d9" (UID: "6cc040b2-2a45-497a-844b-df1ec94af4d9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.294534 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.295021 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.295045 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.295066 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.295082 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc040b2-2a45-497a-844b-df1ec94af4d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.951620 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fb8f77c9-hr9c4" Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.955449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qg67" event={"ID":"3c77e04c-9666-4578-bc3f-8d91d72ae5d0","Type":"ContainerStarted","Data":"e3f5d9cef3f8e241906763fc010a75bf2b6ff84393fc24c9878ac8e3ec288a7c"} Mar 14 09:22:30 crc kubenswrapper[4687]: I0314 09:22:30.979234 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5qg67" podStartSLOduration=1.979212086 podStartE2EDuration="1.979212086s" podCreationTimestamp="2026-03-14 09:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:30.975640759 +0000 UTC m=+1535.963881144" watchObservedRunningTime="2026-03-14 09:22:30.979212086 +0000 UTC m=+1535.967452461" Mar 14 09:22:31 crc kubenswrapper[4687]: I0314 09:22:31.002411 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:22:31 crc kubenswrapper[4687]: I0314 09:22:31.012511 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fb8f77c9-hr9c4"] Mar 14 09:22:31 crc kubenswrapper[4687]: I0314 09:22:31.750472 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" path="/var/lib/kubelet/pods/6cc040b2-2a45-497a-844b-df1ec94af4d9/volumes" Mar 14 09:22:33 crc kubenswrapper[4687]: I0314 09:22:33.737588 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:33 crc kubenswrapper[4687]: E0314 09:22:33.737902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:34 crc kubenswrapper[4687]: I0314 09:22:34.185499 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:34 crc kubenswrapper[4687]: I0314 09:22:34.185827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:35 crc kubenswrapper[4687]: I0314 09:22:35.198528 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:35 crc kubenswrapper[4687]: I0314 09:22:35.198792 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:36 crc kubenswrapper[4687]: I0314 09:22:36.010031 4687 generic.go:334] "Generic (PLEG): container finished" podID="3c77e04c-9666-4578-bc3f-8d91d72ae5d0" containerID="e3f5d9cef3f8e241906763fc010a75bf2b6ff84393fc24c9878ac8e3ec288a7c" exitCode=0 Mar 14 09:22:36 crc kubenswrapper[4687]: I0314 09:22:36.010126 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qg67" event={"ID":"3c77e04c-9666-4578-bc3f-8d91d72ae5d0","Type":"ContainerDied","Data":"e3f5d9cef3f8e241906763fc010a75bf2b6ff84393fc24c9878ac8e3ec288a7c"} Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.396871 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.562603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts\") pod \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.562752 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hcjg\" (UniqueName: \"kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg\") pod \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.562780 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle\") pod \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.562863 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data\") pod \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\" (UID: \"3c77e04c-9666-4578-bc3f-8d91d72ae5d0\") " Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.567959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts" (OuterVolumeSpecName: "scripts") pod "3c77e04c-9666-4578-bc3f-8d91d72ae5d0" (UID: "3c77e04c-9666-4578-bc3f-8d91d72ae5d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.568506 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg" (OuterVolumeSpecName: "kube-api-access-2hcjg") pod "3c77e04c-9666-4578-bc3f-8d91d72ae5d0" (UID: "3c77e04c-9666-4578-bc3f-8d91d72ae5d0"). InnerVolumeSpecName "kube-api-access-2hcjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.596062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c77e04c-9666-4578-bc3f-8d91d72ae5d0" (UID: "3c77e04c-9666-4578-bc3f-8d91d72ae5d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.602974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data" (OuterVolumeSpecName: "config-data") pod "3c77e04c-9666-4578-bc3f-8d91d72ae5d0" (UID: "3c77e04c-9666-4578-bc3f-8d91d72ae5d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.665230 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.665583 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hcjg\" (UniqueName: \"kubernetes.io/projected/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-kube-api-access-2hcjg\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.665607 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:37 crc kubenswrapper[4687]: I0314 09:22:37.665619 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77e04c-9666-4578-bc3f-8d91d72ae5d0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.035974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qg67" event={"ID":"3c77e04c-9666-4578-bc3f-8d91d72ae5d0","Type":"ContainerDied","Data":"a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a"} Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.036014 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4db531d752b3e063b6c80396b31b6c5e6050f5f14cfb60ec6dd447e1f80c86a" Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.036095 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qg67" Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.206610 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.206845 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-log" containerID="cri-o://2dda7e8ea06378293e9701b3ac95ce45f713beda7797cbafe53d2c2cf0d01934" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.206950 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-api" containerID="cri-o://4f63f0f52685c728229558541057fbcae1e620da8ae1745ae0d00f742b06c7f7" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.239923 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.240172 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerName="nova-scheduler-scheduler" containerID="cri-o://5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.250780 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.251066 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-log" containerID="cri-o://714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4687]: I0314 09:22:38.251162 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-metadata" containerID="cri-o://9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81" gracePeriod=30 Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.047739 4687 generic.go:334] "Generic (PLEG): container finished" podID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerID="714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916" exitCode=143 Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.047818 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerDied","Data":"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916"} Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.050459 4687 generic.go:334] "Generic (PLEG): container finished" podID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerID="2dda7e8ea06378293e9701b3ac95ce45f713beda7797cbafe53d2c2cf0d01934" exitCode=143 Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.050488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerDied","Data":"2dda7e8ea06378293e9701b3ac95ce45f713beda7797cbafe53d2c2cf0d01934"} Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.564251 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.712911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pg6z\" (UniqueName: \"kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z\") pod \"17e141f8-36d4-4f4a-9867-a39af83f994b\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.713014 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs\") pod \"17e141f8-36d4-4f4a-9867-a39af83f994b\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.713044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs\") pod \"17e141f8-36d4-4f4a-9867-a39af83f994b\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.713182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle\") pod \"17e141f8-36d4-4f4a-9867-a39af83f994b\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.713249 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data\") pod \"17e141f8-36d4-4f4a-9867-a39af83f994b\" (UID: \"17e141f8-36d4-4f4a-9867-a39af83f994b\") " Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.713718 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs" (OuterVolumeSpecName: "logs") pod "17e141f8-36d4-4f4a-9867-a39af83f994b" (UID: "17e141f8-36d4-4f4a-9867-a39af83f994b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.729861 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z" (OuterVolumeSpecName: "kube-api-access-5pg6z") pod "17e141f8-36d4-4f4a-9867-a39af83f994b" (UID: "17e141f8-36d4-4f4a-9867-a39af83f994b"). InnerVolumeSpecName "kube-api-access-5pg6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.752289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e141f8-36d4-4f4a-9867-a39af83f994b" (UID: "17e141f8-36d4-4f4a-9867-a39af83f994b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.780159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data" (OuterVolumeSpecName: "config-data") pod "17e141f8-36d4-4f4a-9867-a39af83f994b" (UID: "17e141f8-36d4-4f4a-9867-a39af83f994b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.815996 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pg6z\" (UniqueName: \"kubernetes.io/projected/17e141f8-36d4-4f4a-9867-a39af83f994b-kube-api-access-5pg6z\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.816032 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e141f8-36d4-4f4a-9867-a39af83f994b-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.816042 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.816052 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.821964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "17e141f8-36d4-4f4a-9867-a39af83f994b" (UID: "17e141f8-36d4-4f4a-9867-a39af83f994b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:39 crc kubenswrapper[4687]: E0314 09:22:39.865548 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 is running failed: container process not found" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:39 crc kubenswrapper[4687]: E0314 09:22:39.865970 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 is running failed: container process not found" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:39 crc kubenswrapper[4687]: E0314 09:22:39.866546 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 is running failed: container process not found" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:39 crc kubenswrapper[4687]: E0314 09:22:39.866620 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerName="nova-scheduler-scheduler" Mar 14 09:22:39 crc kubenswrapper[4687]: I0314 09:22:39.917715 4687 reconciler_common.go:293] "Volume detached for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e141f8-36d4-4f4a-9867-a39af83f994b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.055661 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.066512 4687 generic.go:334] "Generic (PLEG): container finished" podID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerID="4f63f0f52685c728229558541057fbcae1e620da8ae1745ae0d00f742b06c7f7" exitCode=0 Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.066544 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerDied","Data":"4f63f0f52685c728229558541057fbcae1e620da8ae1745ae0d00f742b06c7f7"} Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.089109 4687 generic.go:334] "Generic (PLEG): container finished" podID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" exitCode=0 Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.089305 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.089265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0","Type":"ContainerDied","Data":"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626"} Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.090053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0","Type":"ContainerDied","Data":"997ca580257829665e596a999356620d15c4cac295e79a21ea4af12ea5a94900"} Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.090074 4687 scope.go:117] "RemoveContainer" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.095448 4687 generic.go:334] "Generic (PLEG): container finished" podID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerID="9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81" exitCode=0 Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.095495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerDied","Data":"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81"} Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.095517 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.095527 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17e141f8-36d4-4f4a-9867-a39af83f994b","Type":"ContainerDied","Data":"6d9749ba3d3e2d7e9073f26a48d867a7fe51f7695f9cb3d6c293347ab4bc9364"} Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.132503 4687 scope.go:117] "RemoveContainer" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.133635 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626\": container with ID starting with 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 not found: ID does not exist" containerID="5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.133687 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626"} err="failed to get container status \"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626\": rpc error: code = NotFound desc = could not find container \"5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626\": container with ID starting with 5fddda12ab4835a042d944b4ed44066b490319fe0e3f2c1a4311354a57e7d626 not found: ID does not exist" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.133721 4687 scope.go:117] "RemoveContainer" containerID="9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.150455 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.172714 4687 
scope.go:117] "RemoveContainer" containerID="714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.180188 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.200522 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201019 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="init" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201042 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="init" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201059 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="dnsmasq-dns" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201065 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="dnsmasq-dns" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201086 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-log" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201092 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-log" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201118 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerName="nova-scheduler-scheduler" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201125 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerName="nova-scheduler-scheduler" Mar 14 
09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201140 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-metadata" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201151 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-metadata" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.201163 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c77e04c-9666-4578-bc3f-8d91d72ae5d0" containerName="nova-manage" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201171 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c77e04c-9666-4578-bc3f-8d91d72ae5d0" containerName="nova-manage" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201623 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-log" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201682 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc040b2-2a45-497a-844b-df1ec94af4d9" containerName="dnsmasq-dns" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201702 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" containerName="nova-scheduler-scheduler" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201760 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c77e04c-9666-4578-bc3f-8d91d72ae5d0" containerName="nova-manage" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.201773 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" containerName="nova-metadata-metadata" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.203470 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.206803 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.206976 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.224473 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.225392 4687 scope.go:117] "RemoveContainer" containerID="9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.229637 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81\": container with ID starting with 9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81 not found: ID does not exist" containerID="9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.229674 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81"} err="failed to get container status \"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81\": rpc error: code = NotFound desc = could not find container \"9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81\": container with ID starting with 9e6d07769c83cf9be89f97aa93cdd73526d0bc9d36173be39288fa608fe24f81 not found: ID does not exist" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.229697 4687 scope.go:117] "RemoveContainer" containerID="714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916" Mar 14 09:22:40 crc 
kubenswrapper[4687]: E0314 09:22:40.230116 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916\": container with ID starting with 714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916 not found: ID does not exist" containerID="714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.230139 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916"} err="failed to get container status \"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916\": rpc error: code = NotFound desc = could not find container \"714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916\": container with ID starting with 714d7f57ec51a1a9339fc35ed564afa0e06694fc824d313fe79acdcaeeee8916 not found: ID does not exist" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.232170 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle\") pod \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.232397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn75g\" (UniqueName: \"kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g\") pod \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.232490 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data\") pod \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\" (UID: \"8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.239571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g" (OuterVolumeSpecName: "kube-api-access-zn75g") pod "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" (UID: "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0"). InnerVolumeSpecName "kube-api-access-zn75g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.280776 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data" (OuterVolumeSpecName: "config-data") pod "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" (UID: "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.285702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" (UID: "8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.306953 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f62246-5351-422e-8e28-fe9926c7dd39-logs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-config-data\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbtq\" (UniqueName: \"kubernetes.io/projected/96f62246-5351-422e-8e28-fe9926c7dd39-kube-api-access-wlbtq\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc 
kubenswrapper[4687]: I0314 09:22:40.336376 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn75g\" (UniqueName: \"kubernetes.io/projected/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-kube-api-access-zn75g\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336389 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.336398 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.429431 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438070 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438175 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " 
Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438435 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438487 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438519 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbrsw\" (UniqueName: \"kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw\") pod \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\" (UID: \"a9040f6b-436a-40fb-bd16-ffc95a4b60ac\") " Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbtq\" (UniqueName: \"kubernetes.io/projected/96f62246-5351-422e-8e28-fe9926c7dd39-kube-api-access-wlbtq\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.438961 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs" (OuterVolumeSpecName: "logs") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.439133 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f62246-5351-422e-8e28-fe9926c7dd39-logs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.439231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-config-data\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.439455 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.440831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f62246-5351-422e-8e28-fe9926c7dd39-logs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.445431 4687 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.446263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-config-data\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.451986 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.454774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw" (OuterVolumeSpecName: "kube-api-access-mbrsw") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "kube-api-access-mbrsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.459774 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.460237 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-log" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.460260 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-log" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.460288 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-api" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.460297 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-api" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.460537 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-api" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.460573 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" containerName="nova-api-log" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.461530 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.461868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbtq\" (UniqueName: \"kubernetes.io/projected/96f62246-5351-422e-8e28-fe9926c7dd39-kube-api-access-wlbtq\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.464667 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.465547 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f62246-5351-422e-8e28-fe9926c7dd39-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f62246-5351-422e-8e28-fe9926c7dd39\") " pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.469860 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.478844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data" (OuterVolumeSpecName: "config-data") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.487978 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.504039 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.515902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a9040f6b-436a-40fb-bd16-ffc95a4b60ac" (UID: "a9040f6b-436a-40fb-bd16-ffc95a4b60ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.529027 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.545169 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.545198 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbrsw\" (UniqueName: \"kubernetes.io/projected/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-kube-api-access-mbrsw\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.545212 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.545222 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.545234 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9040f6b-436a-40fb-bd16-ffc95a4b60ac-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.648401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqhcd\" (UniqueName: \"kubernetes.io/projected/3a5150eb-8e4a-4162-b122-602614d01773-kube-api-access-hqhcd\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.648580 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-config-data\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.648615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.737033 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:40 crc kubenswrapper[4687]: E0314 09:22:40.737610 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.750803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqhcd\" (UniqueName: \"kubernetes.io/projected/3a5150eb-8e4a-4162-b122-602614d01773-kube-api-access-hqhcd\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.751007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-config-data\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.751058 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.756077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.756100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5150eb-8e4a-4162-b122-602614d01773-config-data\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.774003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqhcd\" (UniqueName: \"kubernetes.io/projected/3a5150eb-8e4a-4162-b122-602614d01773-kube-api-access-hqhcd\") pod \"nova-scheduler-0\" (UID: \"3a5150eb-8e4a-4162-b122-602614d01773\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.790702 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4687]: I0314 09:22:40.991194 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.109454 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.109454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9040f6b-436a-40fb-bd16-ffc95a4b60ac","Type":"ContainerDied","Data":"bfe91eb96a0236550341bd21ff96a1841e82d5c893ed06423f2bdef0c7d89326"} Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.109573 4687 scope.go:117] "RemoveContainer" containerID="4f63f0f52685c728229558541057fbcae1e620da8ae1745ae0d00f742b06c7f7" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.118212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f62246-5351-422e-8e28-fe9926c7dd39","Type":"ContainerStarted","Data":"710a46c3ef55a681eac82614d638a4b43712f20f70fb59e9a78fdd3fc3cc2299"} Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.152231 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.160245 4687 scope.go:117] "RemoveContainer" containerID="2dda7e8ea06378293e9701b3ac95ce45f713beda7797cbafe53d2c2cf0d01934" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.193369 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.203499 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.205269 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.208920 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.209111 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.209245 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.216490 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.252312 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.364352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsrt\" (UniqueName: \"kubernetes.io/projected/47f5a5c0-8928-4d3e-9098-a3338401c52e-kube-api-access-zwsrt\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.364395 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f5a5c0-8928-4d3e-9098-a3338401c52e-logs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.364422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-config-data\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 
09:22:41.364449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-public-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.364471 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.364497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.465975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsrt\" (UniqueName: \"kubernetes.io/projected/47f5a5c0-8928-4d3e-9098-a3338401c52e-kube-api-access-zwsrt\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f5a5c0-8928-4d3e-9098-a3338401c52e-logs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-config-data\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466075 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-public-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466123 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.466663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f5a5c0-8928-4d3e-9098-a3338401c52e-logs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.470444 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.472019 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-public-tls-certs\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.474134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-config-data\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.479989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f5a5c0-8928-4d3e-9098-a3338401c52e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.493957 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsrt\" (UniqueName: \"kubernetes.io/projected/47f5a5c0-8928-4d3e-9098-a3338401c52e-kube-api-access-zwsrt\") pod \"nova-api-0\" (UID: \"47f5a5c0-8928-4d3e-9098-a3338401c52e\") " pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.536727 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.755901 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e141f8-36d4-4f4a-9867-a39af83f994b" path="/var/lib/kubelet/pods/17e141f8-36d4-4f4a-9867-a39af83f994b/volumes" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.756993 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0" path="/var/lib/kubelet/pods/8fac7eeb-48c5-4c36-b363-cc4c6b63c7d0/volumes" Mar 14 09:22:41 crc kubenswrapper[4687]: I0314 09:22:41.757846 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9040f6b-436a-40fb-bd16-ffc95a4b60ac" path="/var/lib/kubelet/pods/a9040f6b-436a-40fb-bd16-ffc95a4b60ac/volumes" Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.024548 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:42 crc kubenswrapper[4687]: W0314 09:22:42.028342 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47f5a5c0_8928_4d3e_9098_a3338401c52e.slice/crio-0168c8c269a73fa8f4086b135980c7107ead56625b0e0cfa4d84e1bd14d77d8c WatchSource:0}: Error finding container 0168c8c269a73fa8f4086b135980c7107ead56625b0e0cfa4d84e1bd14d77d8c: Status 404 returned error can't find the container with id 0168c8c269a73fa8f4086b135980c7107ead56625b0e0cfa4d84e1bd14d77d8c Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.131109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47f5a5c0-8928-4d3e-9098-a3338401c52e","Type":"ContainerStarted","Data":"0168c8c269a73fa8f4086b135980c7107ead56625b0e0cfa4d84e1bd14d77d8c"} Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.132735 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"96f62246-5351-422e-8e28-fe9926c7dd39","Type":"ContainerStarted","Data":"663aa796d0a4ec055b8e684439c1e85c6fb71d86245640216d88f90f30dfa1b7"} Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.132758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f62246-5351-422e-8e28-fe9926c7dd39","Type":"ContainerStarted","Data":"f20397f59b7edb59fee583acf57f3f2893c02f432bf75b1cc2fb999ffea77814"} Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.134173 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a5150eb-8e4a-4162-b122-602614d01773","Type":"ContainerStarted","Data":"b33f4a608f64d8eaa4207c221f57de0aefd4eb6187c9175d70cd4bcb0aa95ffc"} Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.134192 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a5150eb-8e4a-4162-b122-602614d01773","Type":"ContainerStarted","Data":"daa87b77fc05a6f3e4bbe382ab42bb1ecf0f22d970cdf7435519ab55f4740fdc"} Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.166649 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.166627846 podStartE2EDuration="2.166627846s" podCreationTimestamp="2026-03-14 09:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:42.156823284 +0000 UTC m=+1547.145063659" watchObservedRunningTime="2026-03-14 09:22:42.166627846 +0000 UTC m=+1547.154868221" Mar 14 09:22:42 crc kubenswrapper[4687]: I0314 09:22:42.181043 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.181025142 podStartE2EDuration="2.181025142s" podCreationTimestamp="2026-03-14 09:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-14 09:22:42.174556111 +0000 UTC m=+1547.162796496" watchObservedRunningTime="2026-03-14 09:22:42.181025142 +0000 UTC m=+1547.169265517" Mar 14 09:22:43 crc kubenswrapper[4687]: I0314 09:22:43.144762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47f5a5c0-8928-4d3e-9098-a3338401c52e","Type":"ContainerStarted","Data":"13c0f747d2f685636ed88ac5b17b095cd97a6088833589840c1821fc0a10a219"} Mar 14 09:22:43 crc kubenswrapper[4687]: I0314 09:22:43.145091 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47f5a5c0-8928-4d3e-9098-a3338401c52e","Type":"ContainerStarted","Data":"5c98421abc12359073a475cdd9f363865d653afd1e6fe4f00966d93291cc2281"} Mar 14 09:22:43 crc kubenswrapper[4687]: I0314 09:22:43.167410 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.167389729 podStartE2EDuration="2.167389729s" podCreationTimestamp="2026-03-14 09:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:43.162028206 +0000 UTC m=+1548.150268581" watchObservedRunningTime="2026-03-14 09:22:43.167389729 +0000 UTC m=+1548.155630104" Mar 14 09:22:44 crc kubenswrapper[4687]: I0314 09:22:44.737466 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:44 crc kubenswrapper[4687]: E0314 09:22:44.737945 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:45 crc kubenswrapper[4687]: I0314 09:22:45.790935 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 09:22:50 crc kubenswrapper[4687]: I0314 09:22:50.530111 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:22:50 crc kubenswrapper[4687]: I0314 09:22:50.530763 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:22:50 crc kubenswrapper[4687]: I0314 09:22:50.791369 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 09:22:50 crc kubenswrapper[4687]: I0314 09:22:50.819372 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 09:22:51 crc kubenswrapper[4687]: I0314 09:22:51.247968 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 09:22:51 crc kubenswrapper[4687]: I0314 09:22:51.537281 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:51 crc kubenswrapper[4687]: I0314 09:22:51.537387 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:51 crc kubenswrapper[4687]: I0314 09:22:51.544502 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f62246-5351-422e-8e28-fe9926c7dd39" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:51 crc kubenswrapper[4687]: I0314 09:22:51.546173 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f62246-5351-422e-8e28-fe9926c7dd39" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 14 09:22:52 crc kubenswrapper[4687]: I0314 09:22:52.554543 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="47f5a5c0-8928-4d3e-9098-a3338401c52e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.238:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:52 crc kubenswrapper[4687]: I0314 09:22:52.554541 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="47f5a5c0-8928-4d3e-9098-a3338401c52e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.238:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:54 crc kubenswrapper[4687]: I0314 09:22:54.111398 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:22:54 crc kubenswrapper[4687]: I0314 09:22:54.111455 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:22:55 crc kubenswrapper[4687]: I0314 09:22:55.316602 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 09:22:55 crc kubenswrapper[4687]: I0314 09:22:55.745754 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:22:55 crc kubenswrapper[4687]: E0314 09:22:55.746315 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:22:58 crc kubenswrapper[4687]: I0314 09:22:58.529697 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:58 crc kubenswrapper[4687]: I0314 09:22:58.531326 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:58 crc kubenswrapper[4687]: I0314 09:22:58.736984 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:22:58 crc kubenswrapper[4687]: E0314 09:22:58.737305 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:22:59 crc kubenswrapper[4687]: I0314 09:22:59.536891 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:22:59 crc kubenswrapper[4687]: I0314 09:22:59.537632 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:23:00 crc kubenswrapper[4687]: I0314 09:23:00.538440 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:00 crc kubenswrapper[4687]: I0314 09:23:00.539556 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:00 crc kubenswrapper[4687]: I0314 09:23:00.542947 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 
14 09:23:01 crc kubenswrapper[4687]: I0314 09:23:01.344101 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:23:01 crc kubenswrapper[4687]: I0314 09:23:01.552201 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:23:01 crc kubenswrapper[4687]: I0314 09:23:01.561708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:23:01 crc kubenswrapper[4687]: I0314 09:23:01.572548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:02 crc kubenswrapper[4687]: I0314 09:23:02.357724 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:06 crc kubenswrapper[4687]: I0314 09:23:06.737078 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:23:06 crc kubenswrapper[4687]: E0314 09:23:06.737543 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:23:09 crc kubenswrapper[4687]: I0314 09:23:09.737093 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:23:09 crc kubenswrapper[4687]: E0314 09:23:09.737800 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" 
Mar 14 09:23:17 crc kubenswrapper[4687]: I0314 09:23:17.737405 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:23:18 crc kubenswrapper[4687]: I0314 09:23:18.525161 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40"} Mar 14 09:23:22 crc kubenswrapper[4687]: I0314 09:23:22.128099 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:23:22 crc kubenswrapper[4687]: I0314 09:23:22.128597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:23:22 crc kubenswrapper[4687]: I0314 09:23:22.736775 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:23:23 crc kubenswrapper[4687]: I0314 09:23:23.589500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955"} Mar 14 09:23:24 crc kubenswrapper[4687]: I0314 09:23:24.111139 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:23:24 crc kubenswrapper[4687]: I0314 09:23:24.111477 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:23:27 crc kubenswrapper[4687]: I0314 09:23:27.645943 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" exitCode=1 Mar 14 09:23:27 crc kubenswrapper[4687]: I0314 09:23:27.646004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40"} Mar 14 09:23:27 crc kubenswrapper[4687]: I0314 09:23:27.646452 4687 scope.go:117] "RemoveContainer" containerID="e6949a5ff3681ff8c1b573408d5384cb176fe756c0a40bdf938c91c19ffbf32a" Mar 14 09:23:27 crc kubenswrapper[4687]: I0314 09:23:27.647758 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:23:27 crc kubenswrapper[4687]: E0314 09:23:27.648066 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.128243 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.128680 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.131628 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:23:32 crc kubenswrapper[4687]: 
E0314 09:23:32.131899 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.220094 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.220138 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.698001 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" exitCode=1 Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.698042 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955"} Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.698081 4687 scope.go:117] "RemoveContainer" containerID="ca18676f20764262f2af6ebb28cf216a083ec3b5fc47e5a78e2911b585b52fbc" Mar 14 09:23:32 crc kubenswrapper[4687]: I0314 09:23:32.699121 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:23:32 crc kubenswrapper[4687]: E0314 09:23:32.699592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" 
pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:23:42 crc kubenswrapper[4687]: I0314 09:23:42.220631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:23:42 crc kubenswrapper[4687]: I0314 09:23:42.221279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:23:42 crc kubenswrapper[4687]: I0314 09:23:42.222277 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:23:42 crc kubenswrapper[4687]: E0314 09:23:42.222749 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:23:46 crc kubenswrapper[4687]: I0314 09:23:46.737556 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:23:46 crc kubenswrapper[4687]: E0314 09:23:46.738366 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:23:53 crc kubenswrapper[4687]: I0314 09:23:53.737312 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:23:53 crc kubenswrapper[4687]: E0314 09:23:53.737923 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.111151 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.111208 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.111250 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.112110 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.112170 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" gracePeriod=600 
Mar 14 09:23:54 crc kubenswrapper[4687]: E0314 09:23:54.233315 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.923506 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" exitCode=0 Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.923571 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade"} Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.923864 4687 scope.go:117] "RemoveContainer" containerID="4f6a676f9e9de0d38b14585a407f7c4a7ec4d7e826880293ed38864981eee9b4" Mar 14 09:23:54 crc kubenswrapper[4687]: I0314 09:23:54.924572 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:23:54 crc kubenswrapper[4687]: E0314 09:23:54.924945 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:23:58 crc kubenswrapper[4687]: I0314 09:23:58.737222 
4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:23:58 crc kubenswrapper[4687]: E0314 09:23:58.737973 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.141174 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558004-q8g5l"] Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.142986 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.145195 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.145427 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.145747 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.151498 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-q8g5l"] Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.313685 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54d47\" (UniqueName: \"kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47\") pod \"auto-csr-approver-29558004-q8g5l\" (UID: \"1581a888-afb5-495e-a425-77118d106d2a\") " 
pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.415802 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54d47\" (UniqueName: \"kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47\") pod \"auto-csr-approver-29558004-q8g5l\" (UID: \"1581a888-afb5-495e-a425-77118d106d2a\") " pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.441787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54d47\" (UniqueName: \"kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47\") pod \"auto-csr-approver-29558004-q8g5l\" (UID: \"1581a888-afb5-495e-a425-77118d106d2a\") " pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.471957 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:00 crc kubenswrapper[4687]: I0314 09:24:00.975186 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-q8g5l"] Mar 14 09:24:01 crc kubenswrapper[4687]: I0314 09:24:01.994391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" event={"ID":"1581a888-afb5-495e-a425-77118d106d2a","Type":"ContainerStarted","Data":"007b7533a8252549ec228bacc125894c851a768d63593c94422fe4dedc3d7e89"} Mar 14 09:24:03 crc kubenswrapper[4687]: I0314 09:24:03.004004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" event={"ID":"1581a888-afb5-495e-a425-77118d106d2a","Type":"ContainerStarted","Data":"0f93407d0853b70fd9f4576b8599566b7e84a8f11c512a577417c0b06f788d41"} Mar 14 09:24:03 crc kubenswrapper[4687]: I0314 09:24:03.018834 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" podStartSLOduration=1.5503146 podStartE2EDuration="3.018813986s" podCreationTimestamp="2026-03-14 09:24:00 +0000 UTC" firstStartedPulling="2026-03-14 09:24:00.984503738 +0000 UTC m=+1625.972744113" lastFinishedPulling="2026-03-14 09:24:02.453003134 +0000 UTC m=+1627.441243499" observedRunningTime="2026-03-14 09:24:03.017999486 +0000 UTC m=+1628.006239861" watchObservedRunningTime="2026-03-14 09:24:03.018813986 +0000 UTC m=+1628.007054361" Mar 14 09:24:04 crc kubenswrapper[4687]: I0314 09:24:04.028026 4687 generic.go:334] "Generic (PLEG): container finished" podID="1581a888-afb5-495e-a425-77118d106d2a" containerID="0f93407d0853b70fd9f4576b8599566b7e84a8f11c512a577417c0b06f788d41" exitCode=0 Mar 14 09:24:04 crc kubenswrapper[4687]: I0314 09:24:04.028106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" event={"ID":"1581a888-afb5-495e-a425-77118d106d2a","Type":"ContainerDied","Data":"0f93407d0853b70fd9f4576b8599566b7e84a8f11c512a577417c0b06f788d41"} Mar 14 09:24:05 crc kubenswrapper[4687]: I0314 09:24:05.435132 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:05 crc kubenswrapper[4687]: I0314 09:24:05.526427 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54d47\" (UniqueName: \"kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47\") pod \"1581a888-afb5-495e-a425-77118d106d2a\" (UID: \"1581a888-afb5-495e-a425-77118d106d2a\") " Mar 14 09:24:05 crc kubenswrapper[4687]: I0314 09:24:05.534907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47" (OuterVolumeSpecName: "kube-api-access-54d47") pod "1581a888-afb5-495e-a425-77118d106d2a" (UID: "1581a888-afb5-495e-a425-77118d106d2a"). InnerVolumeSpecName "kube-api-access-54d47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:24:05 crc kubenswrapper[4687]: I0314 09:24:05.628627 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54d47\" (UniqueName: \"kubernetes.io/projected/1581a888-afb5-495e-a425-77118d106d2a-kube-api-access-54d47\") on node \"crc\" DevicePath \"\"" Mar 14 09:24:05 crc kubenswrapper[4687]: I0314 09:24:05.749673 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:24:05 crc kubenswrapper[4687]: E0314 09:24:05.750198 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:24:06 crc kubenswrapper[4687]: I0314 09:24:06.050242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" 
event={"ID":"1581a888-afb5-495e-a425-77118d106d2a","Type":"ContainerDied","Data":"007b7533a8252549ec228bacc125894c851a768d63593c94422fe4dedc3d7e89"} Mar 14 09:24:06 crc kubenswrapper[4687]: I0314 09:24:06.050696 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007b7533a8252549ec228bacc125894c851a768d63593c94422fe4dedc3d7e89" Mar 14 09:24:06 crc kubenswrapper[4687]: I0314 09:24:06.050365 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-q8g5l" Mar 14 09:24:06 crc kubenswrapper[4687]: I0314 09:24:06.098905 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-pjvpw"] Mar 14 09:24:06 crc kubenswrapper[4687]: I0314 09:24:06.116159 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-pjvpw"] Mar 14 09:24:07 crc kubenswrapper[4687]: I0314 09:24:07.747681 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7baed2-71ff-425e-92e0-da1afa67a430" path="/var/lib/kubelet/pods/5e7baed2-71ff-425e-92e0-da1afa67a430/volumes" Mar 14 09:24:09 crc kubenswrapper[4687]: I0314 09:24:09.737442 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:24:09 crc kubenswrapper[4687]: E0314 09:24:09.738009 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:24:10 crc kubenswrapper[4687]: I0314 09:24:10.737932 4687 scope.go:117] "RemoveContainer" 
containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:24:10 crc kubenswrapper[4687]: E0314 09:24:10.739037 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:24:18 crc kubenswrapper[4687]: I0314 09:24:18.737141 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:24:18 crc kubenswrapper[4687]: E0314 09:24:18.737985 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:24:22 crc kubenswrapper[4687]: I0314 09:24:22.736843 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:24:22 crc kubenswrapper[4687]: E0314 09:24:22.737259 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:24:25 crc kubenswrapper[4687]: I0314 09:24:25.743936 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:24:25 crc kubenswrapper[4687]: E0314 09:24:25.744755 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:24:26 crc kubenswrapper[4687]: I0314 09:24:26.062441 4687 scope.go:117] "RemoveContainer" containerID="8c0c9f182cb39763cb31ee985e82b752908184f0f6b7bb1f7553b495ec2351dc" Mar 14 09:24:26 crc kubenswrapper[4687]: I0314 09:24:26.101741 4687 scope.go:117] "RemoveContainer" containerID="3f55d235a92f9d52cf4adcf065a8a0456bd8d230d521d9b6c04f46785639ee87" Mar 14 09:24:26 crc kubenswrapper[4687]: I0314 09:24:26.166632 4687 scope.go:117] "RemoveContainer" containerID="3d7fbf752efbfbe638129ff99c6c0aba52b9e58e97675ee51b6d45424a773e69" Mar 14 09:24:31 crc kubenswrapper[4687]: I0314 09:24:31.737549 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:24:31 crc kubenswrapper[4687]: E0314 09:24:31.738369 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:24:35 crc kubenswrapper[4687]: I0314 09:24:35.747467 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:24:35 crc kubenswrapper[4687]: E0314 09:24:35.748233 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:24:38 crc kubenswrapper[4687]: I0314 09:24:38.737658 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:24:38 crc kubenswrapper[4687]: E0314 09:24:38.738358 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:24:44 crc kubenswrapper[4687]: I0314 09:24:44.736821 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:24:44 crc kubenswrapper[4687]: E0314 09:24:44.737600 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:24:49 crc kubenswrapper[4687]: I0314 09:24:49.737223 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:24:49 crc kubenswrapper[4687]: E0314 09:24:49.738153 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:24:52 crc kubenswrapper[4687]: I0314 09:24:52.737787 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:24:52 crc kubenswrapper[4687]: E0314 09:24:52.738654 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.244516 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:24:53 crc kubenswrapper[4687]: E0314 09:24:53.244897 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1581a888-afb5-495e-a425-77118d106d2a" containerName="oc" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.244913 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1581a888-afb5-495e-a425-77118d106d2a" containerName="oc" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.245100 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1581a888-afb5-495e-a425-77118d106d2a" containerName="oc" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.246864 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.256021 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.415972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.416158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphgk\" (UniqueName: \"kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.416366 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.517895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.517963 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.518460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.518538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.518688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphgk\" (UniqueName: \"kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.538493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphgk\" (UniqueName: \"kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk\") pod \"community-operators-x7mx5\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:53 crc kubenswrapper[4687]: I0314 09:24:53.572203 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:24:54 crc kubenswrapper[4687]: I0314 09:24:54.062282 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:24:54 crc kubenswrapper[4687]: W0314 09:24:54.072675 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643b1200_7a7a_485c_bdb7_6021594042cb.slice/crio-d3ddb5858d43b62ba39ce596ea9a1ceffd3828a399e72a2eef345b19a0023302 WatchSource:0}: Error finding container d3ddb5858d43b62ba39ce596ea9a1ceffd3828a399e72a2eef345b19a0023302: Status 404 returned error can't find the container with id d3ddb5858d43b62ba39ce596ea9a1ceffd3828a399e72a2eef345b19a0023302 Mar 14 09:24:54 crc kubenswrapper[4687]: I0314 09:24:54.553284 4687 generic.go:334] "Generic (PLEG): container finished" podID="643b1200-7a7a-485c-bdb7-6021594042cb" containerID="ac439b26e45ec934088aa85a2a06d98c8cc61447715d7aa5935bd79fdc3d8fe5" exitCode=0 Mar 14 09:24:54 crc kubenswrapper[4687]: I0314 09:24:54.553341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerDied","Data":"ac439b26e45ec934088aa85a2a06d98c8cc61447715d7aa5935bd79fdc3d8fe5"} Mar 14 09:24:54 crc kubenswrapper[4687]: I0314 09:24:54.553368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerStarted","Data":"d3ddb5858d43b62ba39ce596ea9a1ceffd3828a399e72a2eef345b19a0023302"} Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.565440 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" 
event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerStarted","Data":"d7e432b33b1a21379cf3f0149b0948f6d42547532b6ff89fc6b5327d076f9a3c"} Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.839559 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.850696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.857247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.965756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5x8w\" (UniqueName: \"kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.965895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:55 crc kubenswrapper[4687]: I0314 09:24:55.966000 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.067419 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.067495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5x8w\" (UniqueName: \"kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.067613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.068124 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.068368 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.088130 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d5x8w\" (UniqueName: \"kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w\") pod \"redhat-marketplace-g4zzz\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.225923 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.577575 4687 generic.go:334] "Generic (PLEG): container finished" podID="643b1200-7a7a-485c-bdb7-6021594042cb" containerID="d7e432b33b1a21379cf3f0149b0948f6d42547532b6ff89fc6b5327d076f9a3c" exitCode=0 Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.577772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerDied","Data":"d7e432b33b1a21379cf3f0149b0948f6d42547532b6ff89fc6b5327d076f9a3c"} Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.736644 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:24:56 crc kubenswrapper[4687]: E0314 09:24:56.736902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:24:56 crc kubenswrapper[4687]: I0314 09:24:56.748042 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:24:56 crc kubenswrapper[4687]: W0314 09:24:56.751678 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf558ba_2fe9_4697_b74a_09da7b98bf4d.slice/crio-ebf490affd24f76eab141d18ff354a87a7d38886f4f90cf83bdd83e52c52aa49 WatchSource:0}: Error finding container ebf490affd24f76eab141d18ff354a87a7d38886f4f90cf83bdd83e52c52aa49: Status 404 returned error can't find the container with id ebf490affd24f76eab141d18ff354a87a7d38886f4f90cf83bdd83e52c52aa49 Mar 14 09:24:57 crc kubenswrapper[4687]: I0314 09:24:57.649824 4687 generic.go:334] "Generic (PLEG): container finished" podID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerID="386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049" exitCode=0 Mar 14 09:24:57 crc kubenswrapper[4687]: I0314 09:24:57.649869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerDied","Data":"386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049"} Mar 14 09:24:57 crc kubenswrapper[4687]: I0314 09:24:57.650158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerStarted","Data":"ebf490affd24f76eab141d18ff354a87a7d38886f4f90cf83bdd83e52c52aa49"} Mar 14 09:24:57 crc kubenswrapper[4687]: I0314 09:24:57.654502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerStarted","Data":"192828ffd92aa05d01f2b3c0705153a0299199bedb5634a626f6732ab08f85a7"} Mar 14 09:24:57 crc kubenswrapper[4687]: I0314 09:24:57.688260 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7mx5" podStartSLOduration=2.241861169 podStartE2EDuration="4.688240578s" podCreationTimestamp="2026-03-14 09:24:53 +0000 UTC" firstStartedPulling="2026-03-14 09:24:54.555808391 
+0000 UTC m=+1679.544048766" lastFinishedPulling="2026-03-14 09:24:57.0021878 +0000 UTC m=+1681.990428175" observedRunningTime="2026-03-14 09:24:57.684169818 +0000 UTC m=+1682.672410193" watchObservedRunningTime="2026-03-14 09:24:57.688240578 +0000 UTC m=+1682.676480953" Mar 14 09:24:58 crc kubenswrapper[4687]: I0314 09:24:58.666542 4687 generic.go:334] "Generic (PLEG): container finished" podID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerID="f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e" exitCode=0 Mar 14 09:24:58 crc kubenswrapper[4687]: I0314 09:24:58.666596 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerDied","Data":"f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e"} Mar 14 09:24:59 crc kubenswrapper[4687]: I0314 09:24:59.682769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerStarted","Data":"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10"} Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.222938 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g4zzz" podStartSLOduration=5.424310969 podStartE2EDuration="7.222917322s" podCreationTimestamp="2026-03-14 09:24:55 +0000 UTC" firstStartedPulling="2026-03-14 09:24:57.652105309 +0000 UTC m=+1682.640345684" lastFinishedPulling="2026-03-14 09:24:59.450711662 +0000 UTC m=+1684.438952037" observedRunningTime="2026-03-14 09:24:59.699698164 +0000 UTC m=+1684.687938539" watchObservedRunningTime="2026-03-14 09:25:02.222917322 +0000 UTC m=+1687.211157707" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.231964 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:02 crc 
kubenswrapper[4687]: I0314 09:25:02.235130 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.252717 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.333586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk8vn\" (UniqueName: \"kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.334027 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.334289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.435792 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk8vn\" (UniqueName: \"kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " 
pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.436115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.436240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.436795 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.436854 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.460692 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk8vn\" (UniqueName: \"kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn\") pod \"certified-operators-vv26h\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " 
pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.565802 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:02 crc kubenswrapper[4687]: I0314 09:25:02.737447 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:25:02 crc kubenswrapper[4687]: E0314 09:25:02.738132 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:25:03 crc kubenswrapper[4687]: W0314 09:25:03.075309 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb6f2aba_1b9a_41a6_9c77_f469c897aa9f.slice/crio-c09ee7fef072b5ea71b5179820bb50edccd25942008f0a9c003ef805fe7b059a WatchSource:0}: Error finding container c09ee7fef072b5ea71b5179820bb50edccd25942008f0a9c003ef805fe7b059a: Status 404 returned error can't find the container with id c09ee7fef072b5ea71b5179820bb50edccd25942008f0a9c003ef805fe7b059a Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.085273 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.573296 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.573365 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.733844 4687 generic.go:334] "Generic (PLEG): container finished" podID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerID="e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e" exitCode=0 Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.733891 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerDied","Data":"e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e"} Mar 14 09:25:03 crc kubenswrapper[4687]: I0314 09:25:03.733921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerStarted","Data":"c09ee7fef072b5ea71b5179820bb50edccd25942008f0a9c003ef805fe7b059a"} Mar 14 09:25:04 crc kubenswrapper[4687]: I0314 09:25:04.627959 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x7mx5" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="registry-server" probeResult="failure" output=< Mar 14 09:25:04 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:25:04 crc kubenswrapper[4687]: > Mar 14 09:25:04 crc kubenswrapper[4687]: I0314 09:25:04.737193 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:25:04 crc kubenswrapper[4687]: E0314 09:25:04.737669 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:25:04 crc 
kubenswrapper[4687]: I0314 09:25:04.750214 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerStarted","Data":"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6"} Mar 14 09:25:05 crc kubenswrapper[4687]: I0314 09:25:05.773356 4687 generic.go:334] "Generic (PLEG): container finished" podID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerID="972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6" exitCode=0 Mar 14 09:25:05 crc kubenswrapper[4687]: I0314 09:25:05.773399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerDied","Data":"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6"} Mar 14 09:25:06 crc kubenswrapper[4687]: I0314 09:25:06.226843 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:06 crc kubenswrapper[4687]: I0314 09:25:06.227205 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:06 crc kubenswrapper[4687]: I0314 09:25:06.278123 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:06 crc kubenswrapper[4687]: I0314 09:25:06.795927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerStarted","Data":"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da"} Mar 14 09:25:06 crc kubenswrapper[4687]: I0314 09:25:06.842499 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:06 crc 
kubenswrapper[4687]: I0314 09:25:06.870814 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vv26h" podStartSLOduration=2.431121416 podStartE2EDuration="4.870796979s" podCreationTimestamp="2026-03-14 09:25:02 +0000 UTC" firstStartedPulling="2026-03-14 09:25:03.735352899 +0000 UTC m=+1688.723593274" lastFinishedPulling="2026-03-14 09:25:06.175028462 +0000 UTC m=+1691.163268837" observedRunningTime="2026-03-14 09:25:06.815689145 +0000 UTC m=+1691.803929520" watchObservedRunningTime="2026-03-14 09:25:06.870796979 +0000 UTC m=+1691.859037354" Mar 14 09:25:08 crc kubenswrapper[4687]: I0314 09:25:08.626993 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:25:08 crc kubenswrapper[4687]: I0314 09:25:08.815549 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g4zzz" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="registry-server" containerID="cri-o://a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10" gracePeriod=2 Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.333202 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.479211 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5x8w\" (UniqueName: \"kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w\") pod \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.479294 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content\") pod \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.479357 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities\") pod \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\" (UID: \"1bf558ba-2fe9-4697-b74a-09da7b98bf4d\") " Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.480746 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities" (OuterVolumeSpecName: "utilities") pod "1bf558ba-2fe9-4697-b74a-09da7b98bf4d" (UID: "1bf558ba-2fe9-4697-b74a-09da7b98bf4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.487743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w" (OuterVolumeSpecName: "kube-api-access-d5x8w") pod "1bf558ba-2fe9-4697-b74a-09da7b98bf4d" (UID: "1bf558ba-2fe9-4697-b74a-09da7b98bf4d"). InnerVolumeSpecName "kube-api-access-d5x8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.502429 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bf558ba-2fe9-4697-b74a-09da7b98bf4d" (UID: "1bf558ba-2fe9-4697-b74a-09da7b98bf4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.581356 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5x8w\" (UniqueName: \"kubernetes.io/projected/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-kube-api-access-d5x8w\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.581621 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.581686 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf558ba-2fe9-4697-b74a-09da7b98bf4d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.826086 4687 generic.go:334] "Generic (PLEG): container finished" podID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerID="a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10" exitCode=0 Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.826129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerDied","Data":"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10"} Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.826158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-g4zzz" event={"ID":"1bf558ba-2fe9-4697-b74a-09da7b98bf4d","Type":"ContainerDied","Data":"ebf490affd24f76eab141d18ff354a87a7d38886f4f90cf83bdd83e52c52aa49"} Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.826182 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4zzz" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.826189 4687 scope.go:117] "RemoveContainer" containerID="a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.853615 4687 scope.go:117] "RemoveContainer" containerID="f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.858423 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.868407 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4zzz"] Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.877434 4687 scope.go:117] "RemoveContainer" containerID="386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.920600 4687 scope.go:117] "RemoveContainer" containerID="a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10" Mar 14 09:25:09 crc kubenswrapper[4687]: E0314 09:25:09.920995 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10\": container with ID starting with a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10 not found: ID does not exist" containerID="a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.921028 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10"} err="failed to get container status \"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10\": rpc error: code = NotFound desc = could not find container \"a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10\": container with ID starting with a70486f15d401286ea92717514c05e9a4034d58ddd67254c6c033900ab347c10 not found: ID does not exist" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.921057 4687 scope.go:117] "RemoveContainer" containerID="f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e" Mar 14 09:25:09 crc kubenswrapper[4687]: E0314 09:25:09.921539 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e\": container with ID starting with f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e not found: ID does not exist" containerID="f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.921608 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e"} err="failed to get container status \"f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e\": rpc error: code = NotFound desc = could not find container \"f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e\": container with ID starting with f3d61ff7088a6a093285f4dc645221c3f60447d86fbb0b6b5d3dfc13c7fb346e not found: ID does not exist" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.921642 4687 scope.go:117] "RemoveContainer" containerID="386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049" Mar 14 09:25:09 crc kubenswrapper[4687]: E0314 
09:25:09.921974 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049\": container with ID starting with 386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049 not found: ID does not exist" containerID="386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049" Mar 14 09:25:09 crc kubenswrapper[4687]: I0314 09:25:09.922001 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049"} err="failed to get container status \"386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049\": rpc error: code = NotFound desc = could not find container \"386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049\": container with ID starting with 386c7a928202361f4a7dc24ebe884cbafeb3452215ba10a79004f63de2f13049 not found: ID does not exist" Mar 14 09:25:11 crc kubenswrapper[4687]: I0314 09:25:11.736961 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:25:11 crc kubenswrapper[4687]: E0314 09:25:11.737628 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:25:11 crc kubenswrapper[4687]: I0314 09:25:11.748805 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" path="/var/lib/kubelet/pods/1bf558ba-2fe9-4697-b74a-09da7b98bf4d/volumes" Mar 14 09:25:12 crc kubenswrapper[4687]: I0314 09:25:12.566426 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:12 crc kubenswrapper[4687]: I0314 09:25:12.566688 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:12 crc kubenswrapper[4687]: I0314 09:25:12.616667 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:12 crc kubenswrapper[4687]: I0314 09:25:12.907871 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:13 crc kubenswrapper[4687]: I0314 09:25:13.222123 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:13 crc kubenswrapper[4687]: I0314 09:25:13.625213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:13 crc kubenswrapper[4687]: I0314 09:25:13.678164 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:14 crc kubenswrapper[4687]: I0314 09:25:14.882761 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vv26h" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="registry-server" containerID="cri-o://98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da" gracePeriod=2 Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.424709 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.533504 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities\") pod \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.533655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content\") pod \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.533684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk8vn\" (UniqueName: \"kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn\") pod \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\" (UID: \"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f\") " Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.534457 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities" (OuterVolumeSpecName: "utilities") pod "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" (UID: "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.540515 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn" (OuterVolumeSpecName: "kube-api-access-dk8vn") pod "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" (UID: "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f"). InnerVolumeSpecName "kube-api-access-dk8vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.622466 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.622696 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7mx5" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="registry-server" containerID="cri-o://192828ffd92aa05d01f2b3c0705153a0299199bedb5634a626f6732ab08f85a7" gracePeriod=2 Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.635757 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.635986 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk8vn\" (UniqueName: \"kubernetes.io/projected/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-kube-api-access-dk8vn\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.895724 4687 generic.go:334] "Generic (PLEG): container finished" podID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerID="98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da" exitCode=0 Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.895807 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vv26h" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.895826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerDied","Data":"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da"} Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.896160 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv26h" event={"ID":"eb6f2aba-1b9a-41a6-9c77-f469c897aa9f","Type":"ContainerDied","Data":"c09ee7fef072b5ea71b5179820bb50edccd25942008f0a9c003ef805fe7b059a"} Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.896192 4687 scope.go:117] "RemoveContainer" containerID="98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.919256 4687 scope.go:117] "RemoveContainer" containerID="972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.952667 4687 scope.go:117] "RemoveContainer" containerID="e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.988741 4687 scope.go:117] "RemoveContainer" containerID="98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da" Mar 14 09:25:15 crc kubenswrapper[4687]: E0314 09:25:15.989230 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da\": container with ID starting with 98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da not found: ID does not exist" containerID="98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.989270 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da"} err="failed to get container status \"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da\": rpc error: code = NotFound desc = could not find container \"98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da\": container with ID starting with 98dc826f79795bf2f538c4f7a1dc808e70329c14ad49c4e2b7aeab0f795367da not found: ID does not exist" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.989300 4687 scope.go:117] "RemoveContainer" containerID="972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6" Mar 14 09:25:15 crc kubenswrapper[4687]: E0314 09:25:15.989987 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6\": container with ID starting with 972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6 not found: ID does not exist" containerID="972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.990016 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6"} err="failed to get container status \"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6\": rpc error: code = NotFound desc = could not find container \"972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6\": container with ID starting with 972d078ea7a577ea583838e0bc2b1c67e49d0d87d7317c60bbe68e75952fe7f6 not found: ID does not exist" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.990035 4687 scope.go:117] "RemoveContainer" containerID="e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e" Mar 14 09:25:15 crc kubenswrapper[4687]: E0314 09:25:15.990448 4687 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e\": container with ID starting with e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e not found: ID does not exist" containerID="e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e" Mar 14 09:25:15 crc kubenswrapper[4687]: I0314 09:25:15.990467 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e"} err="failed to get container status \"e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e\": rpc error: code = NotFound desc = could not find container \"e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e\": container with ID starting with e18784dd1b572492a6bd1afa61116202f38b25ac1b98239b68988a9687eeb98e not found: ID does not exist" Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.442714 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" (UID: "eb6f2aba-1b9a-41a6-9c77-f469c897aa9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.450074 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.526480 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.534773 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vv26h"] Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.736769 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:25:16 crc kubenswrapper[4687]: E0314 09:25:16.737132 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.909379 4687 generic.go:334] "Generic (PLEG): container finished" podID="643b1200-7a7a-485c-bdb7-6021594042cb" containerID="192828ffd92aa05d01f2b3c0705153a0299199bedb5634a626f6732ab08f85a7" exitCode=0 Mar 14 09:25:16 crc kubenswrapper[4687]: I0314 09:25:16.909450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerDied","Data":"192828ffd92aa05d01f2b3c0705153a0299199bedb5634a626f6732ab08f85a7"} Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.227971 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.264976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content\") pod \"643b1200-7a7a-485c-bdb7-6021594042cb\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.265059 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities\") pod \"643b1200-7a7a-485c-bdb7-6021594042cb\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.265282 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kphgk\" (UniqueName: \"kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk\") pod \"643b1200-7a7a-485c-bdb7-6021594042cb\" (UID: \"643b1200-7a7a-485c-bdb7-6021594042cb\") " Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.267103 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities" (OuterVolumeSpecName: "utilities") pod "643b1200-7a7a-485c-bdb7-6021594042cb" (UID: "643b1200-7a7a-485c-bdb7-6021594042cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.267626 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.281293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk" (OuterVolumeSpecName: "kube-api-access-kphgk") pod "643b1200-7a7a-485c-bdb7-6021594042cb" (UID: "643b1200-7a7a-485c-bdb7-6021594042cb"). InnerVolumeSpecName "kube-api-access-kphgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.325254 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "643b1200-7a7a-485c-bdb7-6021594042cb" (UID: "643b1200-7a7a-485c-bdb7-6021594042cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.368901 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kphgk\" (UniqueName: \"kubernetes.io/projected/643b1200-7a7a-485c-bdb7-6021594042cb-kube-api-access-kphgk\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.368940 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643b1200-7a7a-485c-bdb7-6021594042cb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.737965 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:25:17 crc kubenswrapper[4687]: E0314 09:25:17.738428 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.749966 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" path="/var/lib/kubelet/pods/eb6f2aba-1b9a-41a6-9c77-f469c897aa9f/volumes" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.922581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7mx5" event={"ID":"643b1200-7a7a-485c-bdb7-6021594042cb","Type":"ContainerDied","Data":"d3ddb5858d43b62ba39ce596ea9a1ceffd3828a399e72a2eef345b19a0023302"} Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.922924 4687 scope.go:117] "RemoveContainer" 
containerID="192828ffd92aa05d01f2b3c0705153a0299199bedb5634a626f6732ab08f85a7" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.922692 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7mx5" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.948654 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.951440 4687 scope.go:117] "RemoveContainer" containerID="d7e432b33b1a21379cf3f0149b0948f6d42547532b6ff89fc6b5327d076f9a3c" Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.959005 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7mx5"] Mar 14 09:25:17 crc kubenswrapper[4687]: I0314 09:25:17.981945 4687 scope.go:117] "RemoveContainer" containerID="ac439b26e45ec934088aa85a2a06d98c8cc61447715d7aa5935bd79fdc3d8fe5" Mar 14 09:25:19 crc kubenswrapper[4687]: I0314 09:25:19.750215 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" path="/var/lib/kubelet/pods/643b1200-7a7a-485c-bdb7-6021594042cb/volumes" Mar 14 09:25:22 crc kubenswrapper[4687]: I0314 09:25:22.737638 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:25:22 crc kubenswrapper[4687]: E0314 09:25:22.738526 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:25:26 crc kubenswrapper[4687]: I0314 09:25:26.265187 4687 scope.go:117] "RemoveContainer" containerID="9e957185f147153546d32cb56518a5184ff153b55baa0615db38cdd9d952bd0e" 
Mar 14 09:25:26 crc kubenswrapper[4687]: I0314 09:25:26.294467 4687 scope.go:117] "RemoveContainer" containerID="6b34699a48807d1be528b0f4a03ac614f0f1cbba9aa9ff96331416b911e21237" Mar 14 09:25:26 crc kubenswrapper[4687]: I0314 09:25:26.317697 4687 scope.go:117] "RemoveContainer" containerID="70c4f9a8b8ff077864fb7796ac21b41e81c7b418a2d12f9fd924165927898f89" Mar 14 09:25:26 crc kubenswrapper[4687]: I0314 09:25:26.359237 4687 scope.go:117] "RemoveContainer" containerID="d13b9b86d742558e3021969d763ff7e111ebadfba47f22ede524574674be9c19" Mar 14 09:25:30 crc kubenswrapper[4687]: I0314 09:25:30.736847 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:25:30 crc kubenswrapper[4687]: E0314 09:25:30.737600 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:25:31 crc kubenswrapper[4687]: I0314 09:25:31.738514 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:25:31 crc kubenswrapper[4687]: E0314 09:25:31.742956 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:25:37 crc kubenswrapper[4687]: I0314 09:25:37.738027 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:25:37 crc 
kubenswrapper[4687]: E0314 09:25:37.739472 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:25:45 crc kubenswrapper[4687]: I0314 09:25:45.744738 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:25:45 crc kubenswrapper[4687]: I0314 09:25:45.745354 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:25:45 crc kubenswrapper[4687]: E0314 09:25:45.745506 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:25:45 crc kubenswrapper[4687]: E0314 09:25:45.745607 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:25:48 crc kubenswrapper[4687]: I0314 09:25:48.756031 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:25:48 crc kubenswrapper[4687]: E0314 09:25:48.759413 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" 
with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.152205 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558006-pf6q4"] Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153262 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153282 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153292 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153300 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153315 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153323 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153433 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153443 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" 
containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153461 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153468 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153484 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153492 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153510 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153517 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153548 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153556 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.153571 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153578 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" 
containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153862 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf558ba-2fe9-4697-b74a-09da7b98bf4d" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153879 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="643b1200-7a7a-485c-bdb7-6021594042cb" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.153892 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6f2aba-1b9a-41a6-9c77-f469c897aa9f" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.154633 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.157123 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.157292 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.159814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.161282 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-pf6q4"] Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.250560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfgw\" (UniqueName: \"kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw\") pod \"auto-csr-approver-29558006-pf6q4\" (UID: \"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226\") " pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:00 crc 
kubenswrapper[4687]: I0314 09:26:00.352198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrfgw\" (UniqueName: \"kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw\") pod \"auto-csr-approver-29558006-pf6q4\" (UID: \"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226\") " pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.370209 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrfgw\" (UniqueName: \"kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw\") pod \"auto-csr-approver-29558006-pf6q4\" (UID: \"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226\") " pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.474947 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.737321 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.737717 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.737887 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:26:00 crc kubenswrapper[4687]: E0314 09:26:00.737948 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:00 crc kubenswrapper[4687]: I0314 09:26:00.957926 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-pf6q4"] Mar 14 09:26:01 crc kubenswrapper[4687]: I0314 09:26:01.401525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" event={"ID":"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226","Type":"ContainerStarted","Data":"8e41226ac6741f00c781dcf5f3f698ffdf24f99891cc235c73f0aa33f8ea71f5"} Mar 14 09:26:01 crc kubenswrapper[4687]: I0314 09:26:01.737596 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:26:01 crc kubenswrapper[4687]: E0314 09:26:01.738242 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:02 crc kubenswrapper[4687]: I0314 09:26:02.414985 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" containerID="aa83c67d87b241aaaaa15361432b0c804b09b87482560e9f335d4945124d06c5" exitCode=0 Mar 14 09:26:02 crc kubenswrapper[4687]: I0314 09:26:02.415079 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" event={"ID":"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226","Type":"ContainerDied","Data":"aa83c67d87b241aaaaa15361432b0c804b09b87482560e9f335d4945124d06c5"} Mar 14 09:26:03 crc kubenswrapper[4687]: I0314 
09:26:03.764232 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:03 crc kubenswrapper[4687]: I0314 09:26:03.822728 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrfgw\" (UniqueName: \"kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw\") pod \"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226\" (UID: \"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226\") " Mar 14 09:26:03 crc kubenswrapper[4687]: I0314 09:26:03.829688 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw" (OuterVolumeSpecName: "kube-api-access-wrfgw") pod "0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" (UID: "0a14e0f7-3af5-4c63-91d1-e9c50ab5a226"). InnerVolumeSpecName "kube-api-access-wrfgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:26:03 crc kubenswrapper[4687]: I0314 09:26:03.926380 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrfgw\" (UniqueName: \"kubernetes.io/projected/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226-kube-api-access-wrfgw\") on node \"crc\" DevicePath \"\"" Mar 14 09:26:04 crc kubenswrapper[4687]: I0314 09:26:04.439669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" event={"ID":"0a14e0f7-3af5-4c63-91d1-e9c50ab5a226","Type":"ContainerDied","Data":"8e41226ac6741f00c781dcf5f3f698ffdf24f99891cc235c73f0aa33f8ea71f5"} Mar 14 09:26:04 crc kubenswrapper[4687]: I0314 09:26:04.439714 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e41226ac6741f00c781dcf5f3f698ffdf24f99891cc235c73f0aa33f8ea71f5" Mar 14 09:26:04 crc kubenswrapper[4687]: I0314 09:26:04.439819 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-pf6q4" Mar 14 09:26:04 crc kubenswrapper[4687]: I0314 09:26:04.839010 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-g74b2"] Mar 14 09:26:04 crc kubenswrapper[4687]: I0314 09:26:04.849081 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-g74b2"] Mar 14 09:26:05 crc kubenswrapper[4687]: I0314 09:26:05.756233 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739e6342-59a1-411b-9d90-2fe6f07e4301" path="/var/lib/kubelet/pods/739e6342-59a1-411b-9d90-2fe6f07e4301/volumes" Mar 14 09:26:11 crc kubenswrapper[4687]: I0314 09:26:11.737126 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:26:12 crc kubenswrapper[4687]: I0314 09:26:12.530444 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0"} Mar 14 09:26:12 crc kubenswrapper[4687]: I0314 09:26:12.737428 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:26:13 crc kubenswrapper[4687]: I0314 09:26:13.548663 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508"} Mar 14 09:26:13 crc kubenswrapper[4687]: I0314 09:26:13.737603 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:26:13 crc kubenswrapper[4687]: E0314 09:26:13.737893 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.654546 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" exitCode=1 Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.654617 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508"} Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.655098 4687 scope.go:117] "RemoveContainer" containerID="d06f517c5454879b563b5899e318779f8ea6e7463269d52b20b68e806b473955" Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.655888 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:26:21 crc kubenswrapper[4687]: E0314 09:26:21.656242 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.658571 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" exitCode=1 Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.658608 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0"} Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.659544 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:26:21 crc kubenswrapper[4687]: E0314 09:26:21.659830 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:21 crc kubenswrapper[4687]: I0314 09:26:21.859510 4687 scope.go:117] "RemoveContainer" containerID="2689d19c5e05258b578918a3a11388893d6d32ab2f4e1a826f1c41a78ab10c40" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.128144 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.128418 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.128568 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.128635 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.220127 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.220534 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.220604 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.220665 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.677416 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:26:22 crc kubenswrapper[4687]: E0314 09:26:22.677750 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:22 crc kubenswrapper[4687]: I0314 09:26:22.680179 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:26:22 crc kubenswrapper[4687]: E0314 09:26:22.680519 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:23 crc kubenswrapper[4687]: I0314 09:26:23.692866 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:26:23 crc kubenswrapper[4687]: I0314 09:26:23.693349 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:26:23 crc 
kubenswrapper[4687]: E0314 09:26:23.693756 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:23 crc kubenswrapper[4687]: E0314 09:26:23.693820 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:24 crc kubenswrapper[4687]: I0314 09:26:24.737108 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:26:24 crc kubenswrapper[4687]: E0314 09:26:24.737527 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:26:26 crc kubenswrapper[4687]: I0314 09:26:26.553578 4687 scope.go:117] "RemoveContainer" containerID="b60d8f5cfe0d663b0626d8fe209e949615a2fc12bb8b8d6db79d514266675a68" Mar 14 09:26:26 crc kubenswrapper[4687]: I0314 09:26:26.616982 4687 scope.go:117] "RemoveContainer" containerID="af110cf0b8923c49db3abd0ccecab1ebd91ae41e8fddb80fe367fbd1ebb95818" Mar 14 09:26:26 crc kubenswrapper[4687]: I0314 09:26:26.638754 4687 scope.go:117] "RemoveContainer" 
containerID="2ddbe2f0e9354450cc2bc3ad40ebd49b91e3605492591c35def7479f0d8b4b57" Mar 14 09:26:34 crc kubenswrapper[4687]: I0314 09:26:34.736987 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:26:34 crc kubenswrapper[4687]: E0314 09:26:34.738180 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:36 crc kubenswrapper[4687]: I0314 09:26:36.737409 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:26:36 crc kubenswrapper[4687]: E0314 09:26:36.737985 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:37 crc kubenswrapper[4687]: I0314 09:26:37.737214 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:26:37 crc kubenswrapper[4687]: E0314 09:26:37.737651 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:26:49 crc kubenswrapper[4687]: I0314 09:26:49.737833 4687 
scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:26:49 crc kubenswrapper[4687]: E0314 09:26:49.738636 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:26:51 crc kubenswrapper[4687]: I0314 09:26:51.737454 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:26:51 crc kubenswrapper[4687]: E0314 09:26:51.738100 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:26:52 crc kubenswrapper[4687]: I0314 09:26:52.737911 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:26:52 crc kubenswrapper[4687]: E0314 09:26:52.738181 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:02 crc kubenswrapper[4687]: I0314 09:27:02.737082 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:27:02 crc kubenswrapper[4687]: 
E0314 09:27:02.737823 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:27:04 crc kubenswrapper[4687]: I0314 09:27:04.737961 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:27:04 crc kubenswrapper[4687]: E0314 09:27:04.738810 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:27:07 crc kubenswrapper[4687]: I0314 09:27:07.737697 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:27:07 crc kubenswrapper[4687]: E0314 09:27:07.738581 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:14 crc kubenswrapper[4687]: I0314 09:27:14.737299 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:27:14 crc kubenswrapper[4687]: E0314 09:27:14.738134 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:27:15 crc kubenswrapper[4687]: I0314 09:27:15.747290 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:27:15 crc kubenswrapper[4687]: E0314 09:27:15.755224 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:27:21 crc kubenswrapper[4687]: I0314 09:27:21.737009 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:27:21 crc kubenswrapper[4687]: E0314 09:27:21.737692 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:26 crc kubenswrapper[4687]: I0314 09:27:26.719747 4687 scope.go:117] "RemoveContainer" containerID="865d3567bcf2c92db7e231039cb76675dbccf0f5c836c1b9b65d464f09be44c6" Mar 14 09:27:26 crc kubenswrapper[4687]: I0314 09:27:26.751663 4687 scope.go:117] "RemoveContainer" containerID="df4c838df92fb76f13b5b1dc0e8c847a2bd814b11ce4675598e8abbe7e474ea2" Mar 14 09:27:26 crc kubenswrapper[4687]: I0314 09:27:26.774527 4687 scope.go:117] "RemoveContainer" 
containerID="3fee86fd71cf355fcc4a4d2276f4a04b092c9fe10cd103f3ba00e7bbcad66d89" Mar 14 09:27:26 crc kubenswrapper[4687]: I0314 09:27:26.808558 4687 scope.go:117] "RemoveContainer" containerID="756ed19de753ca91054d4e24cef9483ef99e1944fa58f109718cc83e38d860af" Mar 14 09:27:27 crc kubenswrapper[4687]: I0314 09:27:27.737464 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:27:27 crc kubenswrapper[4687]: I0314 09:27:27.737712 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:27:27 crc kubenswrapper[4687]: E0314 09:27:27.737797 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:27:27 crc kubenswrapper[4687]: E0314 09:27:27.737948 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:27:30 crc kubenswrapper[4687]: I0314 09:27:30.042106 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f783-account-create-update-q84bz"] Mar 14 09:27:30 crc kubenswrapper[4687]: I0314 09:27:30.051899 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-f1ca-account-create-update-rl7f8"] Mar 14 09:27:30 crc kubenswrapper[4687]: I0314 09:27:30.060163 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f783-account-create-update-q84bz"] Mar 14 09:27:30 crc 
kubenswrapper[4687]: I0314 09:27:30.068697 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-f1ca-account-create-update-rl7f8"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.040367 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ff42-account-create-update-bhzs2"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.051150 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mhb7p"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.060790 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sbqbl"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.069662 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-nhs66"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.077975 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mhb7p"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.088470 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-nhs66"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.098273 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sbqbl"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.108061 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ff42-account-create-update-bhzs2"] Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.753034 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbc215f-2d93-4a3f-9a7f-bb7eca345909" path="/var/lib/kubelet/pods/1bbc215f-2d93-4a3f-9a7f-bb7eca345909/volumes" Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.754041 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0f1160-e2a9-4c87-9a91-24ed3251af19" path="/var/lib/kubelet/pods/2b0f1160-e2a9-4c87-9a91-24ed3251af19/volumes" Mar 14 09:27:31 crc 
kubenswrapper[4687]: I0314 09:27:31.760489 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e14df9-b415-4748-94bf-ad4278477e9d" path="/var/lib/kubelet/pods/76e14df9-b415-4748-94bf-ad4278477e9d/volumes" Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.761029 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8299fb68-8552-485f-8254-090687b29db6" path="/var/lib/kubelet/pods/8299fb68-8552-485f-8254-090687b29db6/volumes" Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.761586 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf98286-ad74-40b1-87c0-20bcb0881806" path="/var/lib/kubelet/pods/bdf98286-ad74-40b1-87c0-20bcb0881806/volumes" Mar 14 09:27:31 crc kubenswrapper[4687]: I0314 09:27:31.763055 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf146320-67e6-4f93-9256-84353134846e" path="/var/lib/kubelet/pods/cf146320-67e6-4f93-9256-84353134846e/volumes" Mar 14 09:27:33 crc kubenswrapper[4687]: I0314 09:27:33.737397 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:27:33 crc kubenswrapper[4687]: E0314 09:27:33.738252 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:34 crc kubenswrapper[4687]: I0314 09:27:34.056905 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5c0d-account-create-update-flcfj"] Mar 14 09:27:34 crc kubenswrapper[4687]: I0314 09:27:34.074354 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-84mgc"] Mar 14 
09:27:34 crc kubenswrapper[4687]: I0314 09:27:34.088123 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5c0d-account-create-update-flcfj"] Mar 14 09:27:34 crc kubenswrapper[4687]: I0314 09:27:34.100400 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-84mgc"] Mar 14 09:27:35 crc kubenswrapper[4687]: I0314 09:27:35.750784 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2036aaf2-2580-421a-979e-d53fabdfbe83" path="/var/lib/kubelet/pods/2036aaf2-2580-421a-979e-d53fabdfbe83/volumes" Mar 14 09:27:35 crc kubenswrapper[4687]: I0314 09:27:35.751861 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5" path="/var/lib/kubelet/pods/6fecb39b-a1b9-4f78-a5ab-a8ea4aadf1c5/volumes" Mar 14 09:27:40 crc kubenswrapper[4687]: I0314 09:27:40.737267 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:27:40 crc kubenswrapper[4687]: E0314 09:27:40.738181 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:27:41 crc kubenswrapper[4687]: I0314 09:27:41.737601 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:27:41 crc kubenswrapper[4687]: E0314 09:27:41.737905 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:27:45 crc kubenswrapper[4687]: I0314 09:27:45.743787 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:27:45 crc kubenswrapper[4687]: E0314 09:27:45.744644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:51 crc kubenswrapper[4687]: I0314 09:27:51.737755 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:27:51 crc kubenswrapper[4687]: E0314 09:27:51.740300 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:27:52 crc kubenswrapper[4687]: I0314 09:27:52.737263 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:27:52 crc kubenswrapper[4687]: E0314 09:27:52.737518 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:27:58 crc kubenswrapper[4687]: I0314 09:27:58.737136 4687 scope.go:117] "RemoveContainer" 
containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:27:58 crc kubenswrapper[4687]: E0314 09:27:58.737880 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:27:59 crc kubenswrapper[4687]: I0314 09:27:59.030391 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zq8ml"] Mar 14 09:27:59 crc kubenswrapper[4687]: I0314 09:27:59.039563 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zq8ml"] Mar 14 09:27:59 crc kubenswrapper[4687]: I0314 09:27:59.748124 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6cf348-a162-431d-9399-f350f28e5b2d" path="/var/lib/kubelet/pods/9b6cf348-a162-431d-9399-f350f28e5b2d/volumes" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.144161 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558008-w9s8f"] Mar 14 09:28:00 crc kubenswrapper[4687]: E0314 09:28:00.144628 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.144649 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.144898 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.145678 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.149480 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.150003 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.150099 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.157146 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-w9s8f"] Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.232383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p472\" (UniqueName: \"kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472\") pod \"auto-csr-approver-29558008-w9s8f\" (UID: \"792e718d-3161-4cbc-989b-5abf93fd869e\") " pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.334804 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p472\" (UniqueName: \"kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472\") pod \"auto-csr-approver-29558008-w9s8f\" (UID: \"792e718d-3161-4cbc-989b-5abf93fd869e\") " pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.357320 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p472\" (UniqueName: \"kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472\") pod \"auto-csr-approver-29558008-w9s8f\" (UID: \"792e718d-3161-4cbc-989b-5abf93fd869e\") " 
pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.468808 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.942926 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-w9s8f"] Mar 14 09:28:00 crc kubenswrapper[4687]: I0314 09:28:00.951363 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:28:01 crc kubenswrapper[4687]: I0314 09:28:01.765412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" event={"ID":"792e718d-3161-4cbc-989b-5abf93fd869e","Type":"ContainerStarted","Data":"639b8acee06e6a62768e1df276199753654392c6b364a85bb3267e4cae02ebb3"} Mar 14 09:28:02 crc kubenswrapper[4687]: I0314 09:28:02.776004 4687 generic.go:334] "Generic (PLEG): container finished" podID="792e718d-3161-4cbc-989b-5abf93fd869e" containerID="497e33fa5ba63e399c48d733ec0be8f8b3aa62f73ea89fdebbd994cccc2f58f2" exitCode=0 Mar 14 09:28:02 crc kubenswrapper[4687]: I0314 09:28:02.776074 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" event={"ID":"792e718d-3161-4cbc-989b-5abf93fd869e","Type":"ContainerDied","Data":"497e33fa5ba63e399c48d733ec0be8f8b3aa62f73ea89fdebbd994cccc2f58f2"} Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.041298 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k84m2"] Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.054409 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b81a-account-create-update-ktznw"] Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.066624 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sz9dv"] Mar 14 09:28:03 
crc kubenswrapper[4687]: I0314 09:28:03.076885 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b81a-account-create-update-ktznw"] Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.088704 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k84m2"] Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.097858 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sz9dv"] Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.737505 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:28:03 crc kubenswrapper[4687]: E0314 09:28:03.737739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.750677 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3" path="/var/lib/kubelet/pods/33f0f47d-cd3e-44b2-9d92-9d1ed20f25a3/volumes" Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.751452 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2" path="/var/lib/kubelet/pods/41a3d5fd-5e08-4f29-8d75-2f1c27d24fa2/volumes" Mar 14 09:28:03 crc kubenswrapper[4687]: I0314 09:28:03.752182 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1cb2ae-49b5-424e-ba93-f222dab8b4cb" path="/var/lib/kubelet/pods/5c1cb2ae-49b5-424e-ba93-f222dab8b4cb/volumes" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.038942 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ed35-account-create-update-hlqxb"] Mar 14 
09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.048667 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ed35-account-create-update-hlqxb"] Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.141365 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.210778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p472\" (UniqueName: \"kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472\") pod \"792e718d-3161-4cbc-989b-5abf93fd869e\" (UID: \"792e718d-3161-4cbc-989b-5abf93fd869e\") " Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.216780 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472" (OuterVolumeSpecName: "kube-api-access-7p472") pod "792e718d-3161-4cbc-989b-5abf93fd869e" (UID: "792e718d-3161-4cbc-989b-5abf93fd869e"). InnerVolumeSpecName "kube-api-access-7p472". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.313831 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p472\" (UniqueName: \"kubernetes.io/projected/792e718d-3161-4cbc-989b-5abf93fd869e-kube-api-access-7p472\") on node \"crc\" DevicePath \"\"" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.737044 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:28:04 crc kubenswrapper[4687]: E0314 09:28:04.737309 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.797214 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" event={"ID":"792e718d-3161-4cbc-989b-5abf93fd869e","Type":"ContainerDied","Data":"639b8acee06e6a62768e1df276199753654392c6b364a85bb3267e4cae02ebb3"} Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.797256 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639b8acee06e6a62768e1df276199753654392c6b364a85bb3267e4cae02ebb3" Mar 14 09:28:04 crc kubenswrapper[4687]: I0314 09:28:04.797264 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-w9s8f" Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.030590 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f02-account-create-update-w9xnm"] Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.040197 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f02-account-create-update-w9xnm"] Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.214776 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-8zs9t"] Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.218245 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-8zs9t"] Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.758401 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b0923a-08cb-49cb-b41e-7f1803315089" path="/var/lib/kubelet/pods/a5b0923a-08cb-49cb-b41e-7f1803315089/volumes" Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.759104 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe27ce9-7c2d-4c53-a5ac-9130c2f6d222" path="/var/lib/kubelet/pods/abe27ce9-7c2d-4c53-a5ac-9130c2f6d222/volumes" Mar 14 09:28:05 crc kubenswrapper[4687]: I0314 09:28:05.759616 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e440bf-b163-4125-bfe0-4d2d32cfb47d" path="/var/lib/kubelet/pods/d9e440bf-b163-4125-bfe0-4d2d32cfb47d/volumes" Mar 14 09:28:12 crc kubenswrapper[4687]: I0314 09:28:12.029377 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rm6zs"] Mar 14 09:28:12 crc kubenswrapper[4687]: I0314 09:28:12.038317 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rm6zs"] Mar 14 09:28:13 crc kubenswrapper[4687]: I0314 09:28:13.737164 4687 scope.go:117] "RemoveContainer" 
containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:28:13 crc kubenswrapper[4687]: E0314 09:28:13.737468 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:28:13 crc kubenswrapper[4687]: I0314 09:28:13.752639 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89dd59a1-f483-40d9-8153-d66e9bb13477" path="/var/lib/kubelet/pods/89dd59a1-f483-40d9-8153-d66e9bb13477/volumes" Mar 14 09:28:15 crc kubenswrapper[4687]: I0314 09:28:15.744539 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:28:15 crc kubenswrapper[4687]: E0314 09:28:15.746082 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:28:15 crc kubenswrapper[4687]: I0314 09:28:15.746394 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:28:15 crc kubenswrapper[4687]: E0314 09:28:15.746695 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" 
Mar 14 09:28:25 crc kubenswrapper[4687]: I0314 09:28:25.747021 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:28:25 crc kubenswrapper[4687]: E0314 09:28:25.747811 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.063387 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-srrwn"] Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.098592 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-srrwn"] Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.865297 4687 scope.go:117] "RemoveContainer" containerID="3f53cda01fc6889f65d9dbe00f63211869062cf7ffe9f71a243691ff174a3f6a" Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.886830 4687 scope.go:117] "RemoveContainer" containerID="71ef6807eeeb630705627bd9dde7ff2f8f22d77f27c08a50797bf38c368c2f87" Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.919839 4687 scope.go:117] "RemoveContainer" containerID="99ff8611f2bc5d219b7af1496417cc53462f6d93954e98a39f6db0637fc9b7a5" Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.955242 4687 scope.go:117] "RemoveContainer" containerID="835eed39430896719b6e962315eabb056eb265478c3ee057960d0a9e60307ac6" Mar 14 09:28:26 crc kubenswrapper[4687]: I0314 09:28:26.974056 4687 scope.go:117] "RemoveContainer" containerID="f3877d4ac39e61968e147f16f3eca83a6892b4f7e656d7dce5b8a3abb2813663" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.022272 4687 scope.go:117] "RemoveContainer" 
containerID="1701ab5d5c42bc99f9e065a62efc087b6ae6a0317d65112bedbe57dc5adafd93" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.063508 4687 scope.go:117] "RemoveContainer" containerID="f067b316f6639f4dd3062d198ee320f6355194821d60a7513c4688b3e7a2e447" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.102799 4687 scope.go:117] "RemoveContainer" containerID="64b7d6d268ab745f4dca34ee046de5affcae1e7c34cf4bffb50bba13a5b2927e" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.159946 4687 scope.go:117] "RemoveContainer" containerID="71cefd35d3e4d7ec211457cc04a7b6881c985be6198d4f1049c62889876a4bc1" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.185222 4687 scope.go:117] "RemoveContainer" containerID="c5160babefb372410068b2ebbed521d748cc6f3cdc5d769e07cfccde8d39c547" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.208776 4687 scope.go:117] "RemoveContainer" containerID="385e1ba082f845a19209eb2104641fab1b2134e01e35b586dd3848fe41063f8b" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.235428 4687 scope.go:117] "RemoveContainer" containerID="6cf2456c817df74ab9a1300cf2c8da5bfa1e0483684ddb06c9d9729455e09a9c" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.264734 4687 scope.go:117] "RemoveContainer" containerID="7d8fa70dd0f3b77557f22a6ec2a35c17e944e5b5531022d011158beef0729f72" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.285073 4687 scope.go:117] "RemoveContainer" containerID="89abd0d7a0606cc724b4dbeeb5ab2c49e74710ccdc1053ce8a8612e8a9918d64" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.304788 4687 scope.go:117] "RemoveContainer" containerID="fcc334b7a37a2c6bf9fd3ac83e82dd13aec88e772f2f063c59c351eccb014648" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.345940 4687 scope.go:117] "RemoveContainer" containerID="428583a947280295edd8d6c2e684589e749e0b1e4d9fe8e26d0616b323a5b673" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.369944 4687 scope.go:117] "RemoveContainer" 
containerID="bea946123e248cbd13aa174aee3307580b559be2796f899433851cd2698d8f06" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.389799 4687 scope.go:117] "RemoveContainer" containerID="af9ec2f08d83ffebd4c23940aaeb9417338a9281d7d5b917e7a6eeabbe7b52ed" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.410431 4687 scope.go:117] "RemoveContainer" containerID="725c97c9e4f381679c3c182441e0be4eebd851dd53d9cb2df9dfdf8e0d835615" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.428506 4687 scope.go:117] "RemoveContainer" containerID="02a288cec9d81e4b0ec8aa16be474f15bc7f6ff74d5b9f65fa4e18ca66f82873" Mar 14 09:28:27 crc kubenswrapper[4687]: I0314 09:28:27.747959 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2204f87-28ac-4294-b695-a189cbf15782" path="/var/lib/kubelet/pods/d2204f87-28ac-4294-b695-a189cbf15782/volumes" Mar 14 09:28:28 crc kubenswrapper[4687]: I0314 09:28:28.027689 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sxhld"] Mar 14 09:28:28 crc kubenswrapper[4687]: I0314 09:28:28.035629 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sxhld"] Mar 14 09:28:28 crc kubenswrapper[4687]: I0314 09:28:28.737402 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:28:28 crc kubenswrapper[4687]: E0314 09:28:28.737632 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:28:29 crc kubenswrapper[4687]: I0314 09:28:29.737976 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:28:29 crc kubenswrapper[4687]: E0314 
09:28:29.738927 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:28:29 crc kubenswrapper[4687]: I0314 09:28:29.762230 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3748411-81a9-4a0d-b7f0-a32f77b42c48" path="/var/lib/kubelet/pods/a3748411-81a9-4a0d-b7f0-a32f77b42c48/volumes" Mar 14 09:28:33 crc kubenswrapper[4687]: I0314 09:28:33.036183 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wztwf"] Mar 14 09:28:33 crc kubenswrapper[4687]: I0314 09:28:33.044121 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wztwf"] Mar 14 09:28:33 crc kubenswrapper[4687]: I0314 09:28:33.748652 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ba5c4b-9a2a-43ce-a6c0-f3488284929c" path="/var/lib/kubelet/pods/38ba5c4b-9a2a-43ce-a6c0-f3488284929c/volumes" Mar 14 09:28:37 crc kubenswrapper[4687]: I0314 09:28:37.738075 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:28:37 crc kubenswrapper[4687]: E0314 09:28:37.739033 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:28:39 crc kubenswrapper[4687]: I0314 09:28:39.737326 4687 scope.go:117] "RemoveContainer" 
containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:28:39 crc kubenswrapper[4687]: E0314 09:28:39.737975 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:28:42 crc kubenswrapper[4687]: I0314 09:28:42.737482 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:28:42 crc kubenswrapper[4687]: E0314 09:28:42.737741 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:28:49 crc kubenswrapper[4687]: I0314 09:28:49.736782 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:28:49 crc kubenswrapper[4687]: E0314 09:28:49.737521 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:28:50 crc kubenswrapper[4687]: I0314 09:28:50.737619 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:28:50 crc kubenswrapper[4687]: E0314 09:28:50.737847 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:28:53 crc kubenswrapper[4687]: I0314 09:28:53.737211 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:28:53 crc kubenswrapper[4687]: E0314 09:28:53.738118 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:01 crc kubenswrapper[4687]: I0314 09:29:01.737251 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:29:02 crc kubenswrapper[4687]: I0314 09:29:02.401931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3"} Mar 14 09:29:04 crc kubenswrapper[4687]: I0314 09:29:04.737783 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:29:04 crc kubenswrapper[4687]: E0314 09:29:04.738619 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:29:06 crc kubenswrapper[4687]: I0314 09:29:06.736632 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:29:06 crc kubenswrapper[4687]: E0314 09:29:06.737053 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:18 crc kubenswrapper[4687]: I0314 09:29:18.738174 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:29:18 crc kubenswrapper[4687]: I0314 09:29:18.738741 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:29:18 crc kubenswrapper[4687]: E0314 09:29:18.738897 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:18 crc kubenswrapper[4687]: E0314 09:29:18.739178 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:29:21 crc kubenswrapper[4687]: I0314 09:29:21.046845 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-224sh"] Mar 14 09:29:21 crc 
kubenswrapper[4687]: I0314 09:29:21.065003 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-224sh"] Mar 14 09:29:21 crc kubenswrapper[4687]: I0314 09:29:21.753219 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e40d20-4fba-44d2-b6f9-ce8c2ac65e88" path="/var/lib/kubelet/pods/82e40d20-4fba-44d2-b6f9-ce8c2ac65e88/volumes" Mar 14 09:29:23 crc kubenswrapper[4687]: I0314 09:29:23.031306 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jb4md"] Mar 14 09:29:23 crc kubenswrapper[4687]: I0314 09:29:23.044424 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jb4md"] Mar 14 09:29:23 crc kubenswrapper[4687]: I0314 09:29:23.750989 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9a5d82-1869-4fbd-924a-12451d765558" path="/var/lib/kubelet/pods/5c9a5d82-1869-4fbd-924a-12451d765558/volumes" Mar 14 09:29:27 crc kubenswrapper[4687]: I0314 09:29:27.727322 4687 scope.go:117] "RemoveContainer" containerID="2e8b97f43d874ea3f51915e5fb56d0a2ab59daae20c06d9af5c184ae558e0773" Mar 14 09:29:27 crc kubenswrapper[4687]: I0314 09:29:27.767070 4687 scope.go:117] "RemoveContainer" containerID="a69dd6b8fd4d105d4650ac42ce4aa18156d851af6acb83ed1d1698ee6d040eb5" Mar 14 09:29:27 crc kubenswrapper[4687]: I0314 09:29:27.828543 4687 scope.go:117] "RemoveContainer" containerID="ff49d2ca03080a4c372d210fefc20f47528c4ef51725f4c703ecd76d60f72ad5" Mar 14 09:29:27 crc kubenswrapper[4687]: I0314 09:29:27.870703 4687 scope.go:117] "RemoveContainer" containerID="35964c67454de0be181c6c4d08fa78944c6f263c2a376a577b5597ef99feec95" Mar 14 09:29:27 crc kubenswrapper[4687]: I0314 09:29:27.914100 4687 scope.go:117] "RemoveContainer" containerID="32cf1cac59e97922186d470fecf0224b0039b5596ebfc945daa18feecf39af21" Mar 14 09:29:30 crc kubenswrapper[4687]: I0314 09:29:30.737966 4687 scope.go:117] "RemoveContainer" 
containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:29:30 crc kubenswrapper[4687]: E0314 09:29:30.738567 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:31 crc kubenswrapper[4687]: I0314 09:29:31.737881 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:29:31 crc kubenswrapper[4687]: E0314 09:29:31.738106 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:29:43 crc kubenswrapper[4687]: I0314 09:29:43.738044 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:29:43 crc kubenswrapper[4687]: E0314 09:29:43.738971 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:44 crc kubenswrapper[4687]: I0314 09:29:44.736465 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:29:44 crc kubenswrapper[4687]: E0314 09:29:44.736652 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:29:47 crc kubenswrapper[4687]: I0314 09:29:47.041165 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-q2lvm"] Mar 14 09:29:47 crc kubenswrapper[4687]: I0314 09:29:47.049175 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-q2lvm"] Mar 14 09:29:47 crc kubenswrapper[4687]: I0314 09:29:47.755843 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c55cdca-7409-4935-8192-c4195a654a45" path="/var/lib/kubelet/pods/0c55cdca-7409-4935-8192-c4195a654a45/volumes" Mar 14 09:29:48 crc kubenswrapper[4687]: I0314 09:29:48.025478 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hsvt5"] Mar 14 09:29:48 crc kubenswrapper[4687]: I0314 09:29:48.033445 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hsvt5"] Mar 14 09:29:49 crc kubenswrapper[4687]: I0314 09:29:49.748762 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffe58c5-8c6d-4c28-9379-3e08e365adef" path="/var/lib/kubelet/pods/1ffe58c5-8c6d-4c28-9379-3e08e365adef/volumes" Mar 14 09:29:55 crc kubenswrapper[4687]: I0314 09:29:55.025890 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-k6kmw"] Mar 14 09:29:55 crc kubenswrapper[4687]: I0314 09:29:55.039278 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-k6kmw"] Mar 14 09:29:55 crc kubenswrapper[4687]: I0314 09:29:55.742759 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:29:55 crc kubenswrapper[4687]: I0314 09:29:55.742957 4687 scope.go:117] "RemoveContainer" 
containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:29:55 crc kubenswrapper[4687]: E0314 09:29:55.743029 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:29:55 crc kubenswrapper[4687]: E0314 09:29:55.743225 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:29:55 crc kubenswrapper[4687]: I0314 09:29:55.749131 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21832052-3293-4320-aed2-58a020acb502" path="/var/lib/kubelet/pods/21832052-3293-4320-aed2-58a020acb502/volumes" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.150642 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7"] Mar 14 09:30:00 crc kubenswrapper[4687]: E0314 09:30:00.151638 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792e718d-3161-4cbc-989b-5abf93fd869e" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.151654 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="792e718d-3161-4cbc-989b-5abf93fd869e" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.151907 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="792e718d-3161-4cbc-989b-5abf93fd869e" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.152753 4687 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.155290 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.155609 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.164769 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558010-9t6vp"] Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.167350 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.169661 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.170648 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.172533 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.180242 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7"] Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.189513 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-9t6vp"] Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.239775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbmcc\" (UniqueName: 
\"kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.239866 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.240142 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.240413 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zn66\" (UniqueName: \"kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66\") pod \"auto-csr-approver-29558010-9t6vp\" (UID: \"c3a60f94-49bb-4245-8704-a041c5f38c2e\") " pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.303102 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.307044 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.317187 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.346529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.346662 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zn66\" (UniqueName: \"kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66\") pod \"auto-csr-approver-29558010-9t6vp\" (UID: \"c3a60f94-49bb-4245-8704-a041c5f38c2e\") " pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.346749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbmcc\" (UniqueName: \"kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.346786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.347767 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.369671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.372075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zn66\" (UniqueName: \"kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66\") pod \"auto-csr-approver-29558010-9t6vp\" (UID: \"c3a60f94-49bb-4245-8704-a041c5f38c2e\") " pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.372246 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbmcc\" (UniqueName: \"kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc\") pod \"collect-profiles-29558010-xkqf7\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.449095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 
09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.449153 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.449175 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklcr\" (UniqueName: \"kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.484515 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.496798 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.550820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.550867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lklcr\" (UniqueName: \"kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.551039 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.551472 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.551713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " 
pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.571187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklcr\" (UniqueName: \"kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr\") pod \"redhat-operators-x2c6c\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:00 crc kubenswrapper[4687]: I0314 09:30:00.628144 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.010045 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7"] Mar 14 09:30:01 crc kubenswrapper[4687]: W0314 09:30:01.020093 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod857aaacc_2144_4712_aa3d_d0e5198b96ca.slice/crio-fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216 WatchSource:0}: Error finding container fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216: Status 404 returned error can't find the container with id fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216 Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.022405 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-9t6vp"] Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.191649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:01 crc kubenswrapper[4687]: W0314 09:30:01.198455 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c2cfb5_2e77_4e06_99fd_fdf10b136ec2.slice/crio-9e5944d524df31bc87530d454d1478f43585b5c67d08e8075880550825832ae8 WatchSource:0}: Error finding container 9e5944d524df31bc87530d454d1478f43585b5c67d08e8075880550825832ae8: Status 404 returned error can't find the container with id 9e5944d524df31bc87530d454d1478f43585b5c67d08e8075880550825832ae8 Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.924257 4687 generic.go:334] "Generic (PLEG): container finished" podID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerID="992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18" exitCode=0 Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.924324 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerDied","Data":"992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18"} Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.924627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerStarted","Data":"9e5944d524df31bc87530d454d1478f43585b5c67d08e8075880550825832ae8"} Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.926301 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" event={"ID":"c3a60f94-49bb-4245-8704-a041c5f38c2e","Type":"ContainerStarted","Data":"c6a01705adfdfb4e04991bcb540c8fd444df108de5d2ea87d924fe05bd83812c"} Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.929050 4687 generic.go:334] "Generic (PLEG): container finished" podID="857aaacc-2144-4712-aa3d-d0e5198b96ca" containerID="dde34f00655c7c50a9c978cb3ee6ca5a1af90a99d8b129f5739bc565ebb07f41" exitCode=0 Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.929105 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" event={"ID":"857aaacc-2144-4712-aa3d-d0e5198b96ca","Type":"ContainerDied","Data":"dde34f00655c7c50a9c978cb3ee6ca5a1af90a99d8b129f5739bc565ebb07f41"} Mar 14 09:30:01 crc kubenswrapper[4687]: I0314 09:30:01.929175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" event={"ID":"857aaacc-2144-4712-aa3d-d0e5198b96ca","Type":"ContainerStarted","Data":"fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216"} Mar 14 09:30:02 crc kubenswrapper[4687]: I0314 09:30:02.939407 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3a60f94-49bb-4245-8704-a041c5f38c2e" containerID="13935bcc5af7b83625fcf8c37bbe3c3c3658c05e020d63c5187832ed3c2529bc" exitCode=0 Mar 14 09:30:02 crc kubenswrapper[4687]: I0314 09:30:02.939616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" event={"ID":"c3a60f94-49bb-4245-8704-a041c5f38c2e","Type":"ContainerDied","Data":"13935bcc5af7b83625fcf8c37bbe3c3c3658c05e020d63c5187832ed3c2529bc"} Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.292300 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.404455 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume\") pod \"857aaacc-2144-4712-aa3d-d0e5198b96ca\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.404519 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume\") pod \"857aaacc-2144-4712-aa3d-d0e5198b96ca\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.404557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbmcc\" (UniqueName: \"kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc\") pod \"857aaacc-2144-4712-aa3d-d0e5198b96ca\" (UID: \"857aaacc-2144-4712-aa3d-d0e5198b96ca\") " Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.405249 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "857aaacc-2144-4712-aa3d-d0e5198b96ca" (UID: "857aaacc-2144-4712-aa3d-d0e5198b96ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.411657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "857aaacc-2144-4712-aa3d-d0e5198b96ca" (UID: "857aaacc-2144-4712-aa3d-d0e5198b96ca"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.411719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc" (OuterVolumeSpecName: "kube-api-access-nbmcc") pod "857aaacc-2144-4712-aa3d-d0e5198b96ca" (UID: "857aaacc-2144-4712-aa3d-d0e5198b96ca"). InnerVolumeSpecName "kube-api-access-nbmcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.506808 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/857aaacc-2144-4712-aa3d-d0e5198b96ca-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.506841 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/857aaacc-2144-4712-aa3d-d0e5198b96ca-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.506854 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbmcc\" (UniqueName: \"kubernetes.io/projected/857aaacc-2144-4712-aa3d-d0e5198b96ca-kube-api-access-nbmcc\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.949864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerStarted","Data":"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c"} Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.951630 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.951627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7" event={"ID":"857aaacc-2144-4712-aa3d-d0e5198b96ca","Type":"ContainerDied","Data":"fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216"} Mar 14 09:30:03 crc kubenswrapper[4687]: I0314 09:30:03.951831 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc664679050ce8eeccc00be32f07071f5cb6b390384b98488c8a5d729220f216" Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.300152 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.422476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zn66\" (UniqueName: \"kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66\") pod \"c3a60f94-49bb-4245-8704-a041c5f38c2e\" (UID: \"c3a60f94-49bb-4245-8704-a041c5f38c2e\") " Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.426363 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66" (OuterVolumeSpecName: "kube-api-access-2zn66") pod "c3a60f94-49bb-4245-8704-a041c5f38c2e" (UID: "c3a60f94-49bb-4245-8704-a041c5f38c2e"). InnerVolumeSpecName "kube-api-access-2zn66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.525129 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zn66\" (UniqueName: \"kubernetes.io/projected/c3a60f94-49bb-4245-8704-a041c5f38c2e-kube-api-access-2zn66\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.965482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" event={"ID":"c3a60f94-49bb-4245-8704-a041c5f38c2e","Type":"ContainerDied","Data":"c6a01705adfdfb4e04991bcb540c8fd444df108de5d2ea87d924fe05bd83812c"} Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.965522 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6a01705adfdfb4e04991bcb540c8fd444df108de5d2ea87d924fe05bd83812c" Mar 14 09:30:04 crc kubenswrapper[4687]: I0314 09:30:04.965533 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-9t6vp" Mar 14 09:30:05 crc kubenswrapper[4687]: I0314 09:30:05.371517 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-q8g5l"] Mar 14 09:30:05 crc kubenswrapper[4687]: I0314 09:30:05.379703 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-q8g5l"] Mar 14 09:30:05 crc kubenswrapper[4687]: I0314 09:30:05.749116 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1581a888-afb5-495e-a425-77118d106d2a" path="/var/lib/kubelet/pods/1581a888-afb5-495e-a425-77118d106d2a/volumes" Mar 14 09:30:06 crc kubenswrapper[4687]: I0314 09:30:06.988372 4687 generic.go:334] "Generic (PLEG): container finished" podID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerID="5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c" exitCode=0 Mar 14 09:30:06 crc kubenswrapper[4687]: I0314 09:30:06.988446 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerDied","Data":"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c"} Mar 14 09:30:07 crc kubenswrapper[4687]: I0314 09:30:07.736981 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:30:07 crc kubenswrapper[4687]: E0314 09:30:07.737290 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:30:07 crc kubenswrapper[4687]: I0314 09:30:07.999414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerStarted","Data":"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141"} Mar 14 09:30:08 crc kubenswrapper[4687]: I0314 09:30:08.021587 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2c6c" podStartSLOduration=2.551891649 podStartE2EDuration="8.021564449s" podCreationTimestamp="2026-03-14 09:30:00 +0000 UTC" firstStartedPulling="2026-03-14 09:30:01.926304119 +0000 UTC m=+1986.914544494" lastFinishedPulling="2026-03-14 09:30:07.395976919 +0000 UTC m=+1992.384217294" observedRunningTime="2026-03-14 09:30:08.016203707 +0000 UTC m=+1993.004444082" watchObservedRunningTime="2026-03-14 09:30:08.021564449 +0000 UTC m=+1993.009804824" Mar 14 09:30:10 crc kubenswrapper[4687]: I0314 09:30:10.629549 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:10 crc 
kubenswrapper[4687]: I0314 09:30:10.629903 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:10 crc kubenswrapper[4687]: I0314 09:30:10.737615 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:30:10 crc kubenswrapper[4687]: E0314 09:30:10.738042 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:30:11 crc kubenswrapper[4687]: I0314 09:30:11.679014 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x2c6c" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="registry-server" probeResult="failure" output=< Mar 14 09:30:11 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:30:11 crc kubenswrapper[4687]: > Mar 14 09:30:18 crc kubenswrapper[4687]: I0314 09:30:18.737542 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:30:18 crc kubenswrapper[4687]: E0314 09:30:18.738505 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:30:20 crc kubenswrapper[4687]: I0314 09:30:20.685288 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:20 crc kubenswrapper[4687]: I0314 
09:30:20.731755 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:20 crc kubenswrapper[4687]: I0314 09:30:20.919637 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:21 crc kubenswrapper[4687]: I0314 09:30:21.737703 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:30:21 crc kubenswrapper[4687]: E0314 09:30:21.738204 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.151911 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2c6c" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="registry-server" containerID="cri-o://08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141" gracePeriod=2 Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.584715 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.683165 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities\") pod \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.683249 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content\") pod \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.683505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lklcr\" (UniqueName: \"kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr\") pod \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\" (UID: \"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2\") " Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.684007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities" (OuterVolumeSpecName: "utilities") pod "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" (UID: "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.684134 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.688852 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr" (OuterVolumeSpecName: "kube-api-access-lklcr") pod "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" (UID: "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2"). InnerVolumeSpecName "kube-api-access-lklcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.785543 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lklcr\" (UniqueName: \"kubernetes.io/projected/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-kube-api-access-lklcr\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.815051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" (UID: "27c2cfb5-2e77-4e06-99fd-fdf10b136ec2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:30:22 crc kubenswrapper[4687]: I0314 09:30:22.887762 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.162936 4687 generic.go:334] "Generic (PLEG): container finished" podID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerID="08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141" exitCode=0 Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.162984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerDied","Data":"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141"} Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.163006 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2c6c" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.163017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2c6c" event={"ID":"27c2cfb5-2e77-4e06-99fd-fdf10b136ec2","Type":"ContainerDied","Data":"9e5944d524df31bc87530d454d1478f43585b5c67d08e8075880550825832ae8"} Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.163038 4687 scope.go:117] "RemoveContainer" containerID="08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.188653 4687 scope.go:117] "RemoveContainer" containerID="5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.200468 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.206816 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2c6c"] Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.208946 4687 scope.go:117] "RemoveContainer" containerID="992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.253826 4687 scope.go:117] "RemoveContainer" containerID="08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141" Mar 14 09:30:23 crc kubenswrapper[4687]: E0314 09:30:23.254422 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141\": container with ID starting with 08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141 not found: ID does not exist" containerID="08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.254451 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141"} err="failed to get container status \"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141\": rpc error: code = NotFound desc = could not find container \"08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141\": container with ID starting with 08c6409b04b8fee38696694eb1836b2eb93e011f72aaac9629a7429406dff141 not found: ID does not exist" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.254470 4687 scope.go:117] "RemoveContainer" containerID="5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c" Mar 14 09:30:23 crc kubenswrapper[4687]: E0314 09:30:23.254799 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c\": container with ID starting with 5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c not found: ID does not exist" containerID="5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.254897 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c"} err="failed to get container status \"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c\": rpc error: code = NotFound desc = could not find container \"5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c\": container with ID starting with 5d58ce16b7d8a962ccf310f175d089647d8a169928d1a7baad0b5132a52edd5c not found: ID does not exist" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.255004 4687 scope.go:117] "RemoveContainer" containerID="992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18" Mar 14 09:30:23 crc kubenswrapper[4687]: E0314 
09:30:23.255661 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18\": container with ID starting with 992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18 not found: ID does not exist" containerID="992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.255689 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18"} err="failed to get container status \"992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18\": rpc error: code = NotFound desc = could not find container \"992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18\": container with ID starting with 992d30774c5f28fe308789f59a7ecc4dec1c42e3e0a426812573e361021e0a18 not found: ID does not exist" Mar 14 09:30:23 crc kubenswrapper[4687]: I0314 09:30:23.746875 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" path="/var/lib/kubelet/pods/27c2cfb5-2e77-4e06-99fd-fdf10b136ec2/volumes" Mar 14 09:30:28 crc kubenswrapper[4687]: I0314 09:30:28.031186 4687 scope.go:117] "RemoveContainer" containerID="1798aa60f8067f36800b36906cbd2bf97ff98e72ce95873d2af8ee6339e7cbea" Mar 14 09:30:28 crc kubenswrapper[4687]: I0314 09:30:28.058511 4687 scope.go:117] "RemoveContainer" containerID="93eb5b6111a0bc45abc0164906acf568235ca1195233623645c8313217837de5" Mar 14 09:30:28 crc kubenswrapper[4687]: I0314 09:30:28.126830 4687 scope.go:117] "RemoveContainer" containerID="0f93407d0853b70fd9f4576b8599566b7e84a8f11c512a577417c0b06f788d41" Mar 14 09:30:28 crc kubenswrapper[4687]: I0314 09:30:28.181547 4687 scope.go:117] "RemoveContainer" containerID="2a1ef1cd75684a1adea5392f9d083355e8093efd65c41c23db14ae43d773c3b6" Mar 14 09:30:29 crc 
kubenswrapper[4687]: I0314 09:30:29.737037 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:30:29 crc kubenswrapper[4687]: E0314 09:30:29.737621 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:30:34 crc kubenswrapper[4687]: I0314 09:30:34.737446 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:30:34 crc kubenswrapper[4687]: E0314 09:30:34.738218 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:30:40 crc kubenswrapper[4687]: I0314 09:30:40.737316 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:30:40 crc kubenswrapper[4687]: E0314 09:30:40.738028 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.047680 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mn87f"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.068126 4687 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell1-f136-account-create-update-xxfd6"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.077429 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2b6tg"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.085243 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ef00-account-create-update-smxvb"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.095319 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mxbml"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.102713 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mn87f"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.109554 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-08a7-account-create-update-drm7n"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.118393 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f136-account-create-update-xxfd6"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.126884 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2b6tg"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.134783 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mxbml"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.142210 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ef00-account-create-update-smxvb"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.149098 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-08a7-account-create-update-drm7n"] Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.746810 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369912db-6ae1-47e5-b92b-842730ab4379" 
path="/var/lib/kubelet/pods/369912db-6ae1-47e5-b92b-842730ab4379/volumes" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.747494 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6d8139-e819-4fb5-9b1a-e8903d4a0724" path="/var/lib/kubelet/pods/4e6d8139-e819-4fb5-9b1a-e8903d4a0724/volumes" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.748046 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9f1369-1580-409d-801b-8da0edcdbedc" path="/var/lib/kubelet/pods/7b9f1369-1580-409d-801b-8da0edcdbedc/volumes" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.748589 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b958e37-c13b-438b-8a4e-04750c0dafed" path="/var/lib/kubelet/pods/8b958e37-c13b-438b-8a4e-04750c0dafed/volumes" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.749555 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8524aad-2294-42c6-9b0e-3c2a9ad79954" path="/var/lib/kubelet/pods/c8524aad-2294-42c6-9b0e-3c2a9ad79954/volumes" Mar 14 09:30:43 crc kubenswrapper[4687]: I0314 09:30:43.750074 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db421c84-d478-4dfa-afad-36220ec7b430" path="/var/lib/kubelet/pods/db421c84-d478-4dfa-afad-36220ec7b430/volumes" Mar 14 09:30:47 crc kubenswrapper[4687]: I0314 09:30:47.737922 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:30:47 crc kubenswrapper[4687]: E0314 09:30:47.738790 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:30:54 crc kubenswrapper[4687]: I0314 09:30:54.738252 4687 scope.go:117] 
"RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:30:54 crc kubenswrapper[4687]: E0314 09:30:54.739015 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:01 crc kubenswrapper[4687]: I0314 09:31:01.737655 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:31:01 crc kubenswrapper[4687]: E0314 09:31:01.738377 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:31:08 crc kubenswrapper[4687]: I0314 09:31:08.736532 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:31:08 crc kubenswrapper[4687]: E0314 09:31:08.737312 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:14 crc kubenswrapper[4687]: I0314 09:31:14.737428 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:31:14 crc kubenswrapper[4687]: E0314 09:31:14.739464 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:31:22 crc kubenswrapper[4687]: I0314 09:31:22.737237 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:31:23 crc kubenswrapper[4687]: I0314 09:31:23.781376 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08"} Mar 14 09:31:24 crc kubenswrapper[4687]: I0314 09:31:24.111921 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:31:24 crc kubenswrapper[4687]: I0314 09:31:24.111986 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.305417 4687 scope.go:117] "RemoveContainer" containerID="2de57c1c33971ae1c63054d7c456e7aa5c597861e1387055a73a6edae16cc777" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.340936 4687 scope.go:117] "RemoveContainer" containerID="09cfe380c40ddc5654448330795c9610cf63201eec25e0cbcdc1b9db6f7e8a4c" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.380907 4687 scope.go:117] "RemoveContainer" 
containerID="9868c41ed7e30d341a57b02894154b3c08e9139d69dd07636b5db4aa5d149082" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.430168 4687 scope.go:117] "RemoveContainer" containerID="1e0bd4a5d9201ebc188577ad9efe7ebedf717f56c96656c87d5ebfc71d146107" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.483187 4687 scope.go:117] "RemoveContainer" containerID="3d44038165bf2a85e3c5245fbe057fce19324262c24b75cdade37988e7487451" Mar 14 09:31:28 crc kubenswrapper[4687]: I0314 09:31:28.529458 4687 scope.go:117] "RemoveContainer" containerID="49e18307a31f8de78df2ed94f4f273d1cd4998a540c666b3a5eb860b6b48e526" Mar 14 09:31:29 crc kubenswrapper[4687]: I0314 09:31:29.740060 4687 scope.go:117] "RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:31:30 crc kubenswrapper[4687]: I0314 09:31:30.850299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086"} Mar 14 09:31:31 crc kubenswrapper[4687]: I0314 09:31:31.861920 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" exitCode=1 Mar 14 09:31:31 crc kubenswrapper[4687]: I0314 09:31:31.861995 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08"} Mar 14 09:31:31 crc kubenswrapper[4687]: I0314 09:31:31.862249 4687 scope.go:117] "RemoveContainer" containerID="41be175a72eada7e832202b93a3f85985a763262cfb4c8d89c5210bb11a85508" Mar 14 09:31:31 crc kubenswrapper[4687]: I0314 09:31:31.862933 4687 scope.go:117] "RemoveContainer" 
containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:31:31 crc kubenswrapper[4687]: E0314 09:31:31.863179 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.128555 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.129286 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.220043 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.220192 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.220284 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.220297 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:31:32 crc kubenswrapper[4687]: I0314 09:31:32.877251 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:31:32 crc kubenswrapper[4687]: E0314 09:31:32.877684 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:33 crc kubenswrapper[4687]: I0314 09:31:33.885390 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:31:33 crc kubenswrapper[4687]: E0314 09:31:33.885784 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:34 crc kubenswrapper[4687]: I0314 09:31:34.067155 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8t9sq"] Mar 14 09:31:34 crc kubenswrapper[4687]: I0314 09:31:34.074843 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8t9sq"] Mar 14 09:31:35 crc kubenswrapper[4687]: I0314 09:31:35.749465 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113a938c-1831-439a-ae3c-5fbf7abfbc81" path="/var/lib/kubelet/pods/113a938c-1831-439a-ae3c-5fbf7abfbc81/volumes" Mar 14 09:31:37 crc kubenswrapper[4687]: I0314 09:31:37.939888 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" exitCode=1 Mar 14 09:31:37 crc kubenswrapper[4687]: I0314 09:31:37.940464 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086"} Mar 14 09:31:37 crc kubenswrapper[4687]: I0314 09:31:37.940505 4687 scope.go:117] 
"RemoveContainer" containerID="1857179abc1135553e588aa89bebd29b1cd7416acc69db8636d0856bb87024c0" Mar 14 09:31:37 crc kubenswrapper[4687]: I0314 09:31:37.941561 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:31:37 crc kubenswrapper[4687]: E0314 09:31:37.941850 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:31:42 crc kubenswrapper[4687]: I0314 09:31:42.127635 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:31:42 crc kubenswrapper[4687]: I0314 09:31:42.128117 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:31:42 crc kubenswrapper[4687]: I0314 09:31:42.128906 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:31:42 crc kubenswrapper[4687]: E0314 09:31:42.129142 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:31:45 crc kubenswrapper[4687]: I0314 09:31:45.742292 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:31:45 crc kubenswrapper[4687]: E0314 09:31:45.742894 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:53 crc kubenswrapper[4687]: I0314 09:31:53.031304 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wnffc"] Mar 14 09:31:53 crc kubenswrapper[4687]: I0314 09:31:53.041222 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wnffc"] Mar 14 09:31:53 crc kubenswrapper[4687]: I0314 09:31:53.747115 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9465bf07-7fc2-49a3-bc32-d6958a605b98" path="/var/lib/kubelet/pods/9465bf07-7fc2-49a3-bc32-d6958a605b98/volumes" Mar 14 09:31:54 crc kubenswrapper[4687]: I0314 09:31:54.110998 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:31:54 crc kubenswrapper[4687]: I0314 09:31:54.111061 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:31:54 crc kubenswrapper[4687]: I0314 09:31:54.737738 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:31:54 crc kubenswrapper[4687]: E0314 09:31:54.738431 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:31:58 crc kubenswrapper[4687]: I0314 09:31:58.029887 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5b8w6"] Mar 14 09:31:58 crc kubenswrapper[4687]: I0314 09:31:58.040515 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5b8w6"] Mar 14 09:31:58 crc kubenswrapper[4687]: I0314 09:31:58.737547 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:31:58 crc kubenswrapper[4687]: E0314 09:31:58.737875 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:31:59 crc kubenswrapper[4687]: I0314 09:31:59.748018 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05cec2e-2f40-4dec-a3cd-7f3d7f54952f" path="/var/lib/kubelet/pods/d05cec2e-2f40-4dec-a3cd-7f3d7f54952f/volumes" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.145958 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558012-vdjt9"] Mar 14 09:32:00 crc kubenswrapper[4687]: E0314 09:32:00.146398 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="extract-utilities" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146415 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="extract-utilities" Mar 14 09:32:00 crc kubenswrapper[4687]: E0314 09:32:00.146429 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857aaacc-2144-4712-aa3d-d0e5198b96ca" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146435 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="857aaacc-2144-4712-aa3d-d0e5198b96ca" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4687]: E0314 09:32:00.146444 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="registry-server" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146451 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="registry-server" Mar 14 09:32:00 crc kubenswrapper[4687]: E0314 09:32:00.146469 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a60f94-49bb-4245-8704-a041c5f38c2e" containerName="oc" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146475 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a60f94-49bb-4245-8704-a041c5f38c2e" containerName="oc" Mar 14 09:32:00 crc kubenswrapper[4687]: E0314 09:32:00.146498 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="extract-content" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146505 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="extract-content" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146677 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a60f94-49bb-4245-8704-a041c5f38c2e" containerName="oc" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146702 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c2cfb5-2e77-4e06-99fd-fdf10b136ec2" containerName="registry-server" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.146710 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="857aaacc-2144-4712-aa3d-d0e5198b96ca" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.147421 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.149985 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.150099 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.154258 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.156565 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-vdjt9"] Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.347783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbcl\" (UniqueName: \"kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl\") pod \"auto-csr-approver-29558012-vdjt9\" (UID: \"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1\") " pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.449937 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbcl\" (UniqueName: \"kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl\") pod \"auto-csr-approver-29558012-vdjt9\" (UID: \"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1\") " pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.468524 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zrbcl\" (UniqueName: \"kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl\") pod \"auto-csr-approver-29558012-vdjt9\" (UID: \"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1\") " pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:00 crc kubenswrapper[4687]: I0314 09:32:00.766390 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:01 crc kubenswrapper[4687]: I0314 09:32:01.214533 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-vdjt9"] Mar 14 09:32:02 crc kubenswrapper[4687]: I0314 09:32:02.168029 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" event={"ID":"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1","Type":"ContainerStarted","Data":"da4023368613056dd8180db6f3250890e842df203ba54b73a17ea8f692e456a5"} Mar 14 09:32:03 crc kubenswrapper[4687]: I0314 09:32:03.178390 4687 generic.go:334] "Generic (PLEG): container finished" podID="e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" containerID="febc582d305c277456e25bf52527881a6f23fe81a888e48ff71ccdc4437fee2a" exitCode=0 Mar 14 09:32:03 crc kubenswrapper[4687]: I0314 09:32:03.178451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" event={"ID":"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1","Type":"ContainerDied","Data":"febc582d305c277456e25bf52527881a6f23fe81a888e48ff71ccdc4437fee2a"} Mar 14 09:32:04 crc kubenswrapper[4687]: I0314 09:32:04.499187 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:04 crc kubenswrapper[4687]: I0314 09:32:04.639537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbcl\" (UniqueName: \"kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl\") pod \"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1\" (UID: \"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1\") " Mar 14 09:32:04 crc kubenswrapper[4687]: I0314 09:32:04.645029 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl" (OuterVolumeSpecName: "kube-api-access-zrbcl") pod "e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" (UID: "e4c895e0-3e7e-4b68-976e-aad54eb4bcd1"). InnerVolumeSpecName "kube-api-access-zrbcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:32:04 crc kubenswrapper[4687]: I0314 09:32:04.743488 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbcl\" (UniqueName: \"kubernetes.io/projected/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1-kube-api-access-zrbcl\") on node \"crc\" DevicePath \"\"" Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.197230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" event={"ID":"e4c895e0-3e7e-4b68-976e-aad54eb4bcd1","Type":"ContainerDied","Data":"da4023368613056dd8180db6f3250890e842df203ba54b73a17ea8f692e456a5"} Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.197268 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4023368613056dd8180db6f3250890e842df203ba54b73a17ea8f692e456a5" Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.197275 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-vdjt9" Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.562238 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-pf6q4"] Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.571495 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-pf6q4"] Mar 14 09:32:05 crc kubenswrapper[4687]: I0314 09:32:05.748218 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a14e0f7-3af5-4c63-91d1-e9c50ab5a226" path="/var/lib/kubelet/pods/0a14e0f7-3af5-4c63-91d1-e9c50ab5a226/volumes" Mar 14 09:32:09 crc kubenswrapper[4687]: I0314 09:32:09.737830 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:32:09 crc kubenswrapper[4687]: E0314 09:32:09.738395 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:32:10 crc kubenswrapper[4687]: I0314 09:32:10.746833 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:32:10 crc kubenswrapper[4687]: E0314 09:32:10.747589 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:32:21 crc kubenswrapper[4687]: I0314 09:32:21.737666 4687 scope.go:117] "RemoveContainer" 
containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:32:21 crc kubenswrapper[4687]: E0314 09:32:21.738674 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:32:22 crc kubenswrapper[4687]: I0314 09:32:22.737473 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:32:22 crc kubenswrapper[4687]: E0314 09:32:22.737840 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.111199 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.111545 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.111590 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.112302 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.112368 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3" gracePeriod=600 Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.368064 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3" exitCode=0 Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.368606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3"} Mar 14 09:32:24 crc kubenswrapper[4687]: I0314 09:32:24.368719 4687 scope.go:117] "RemoveContainer" containerID="02f0229cd246d7e0b9dfc9c125c9c8d803d0d2eadf3c5f440ef69e1d9783fade" Mar 14 09:32:25 crc kubenswrapper[4687]: I0314 09:32:25.379003 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267"} Mar 14 09:32:28 crc kubenswrapper[4687]: I0314 09:32:28.678854 4687 scope.go:117] "RemoveContainer" containerID="f199f955345f7a9c262e01cebbf73127f4af3751df881bef1a1ad2ff1d037309" Mar 14 09:32:28 crc kubenswrapper[4687]: I0314 09:32:28.753625 4687 scope.go:117] "RemoveContainer" containerID="aa83c67d87b241aaaaa15361432b0c804b09b87482560e9f335d4945124d06c5" Mar 14 09:32:28 crc kubenswrapper[4687]: I0314 09:32:28.797189 4687 scope.go:117] "RemoveContainer" containerID="29fd10a2f6f3e52fbba20c97885ce9e71a966c3cb04b637a92b060ab186daf33" Mar 14 09:32:28 crc kubenswrapper[4687]: I0314 09:32:28.834690 4687 scope.go:117] "RemoveContainer" containerID="1dd55d4bbcaf7b020c3d6ee18240e52dd4d9d90a74197a969a60516e5ad3f343" Mar 14 09:32:33 crc kubenswrapper[4687]: I0314 09:32:33.738013 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:32:33 crc kubenswrapper[4687]: I0314 09:32:33.738749 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:32:33 crc kubenswrapper[4687]: E0314 09:32:33.738889 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:32:33 crc kubenswrapper[4687]: E0314 09:32:33.739579 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:32:38 crc kubenswrapper[4687]: I0314 09:32:38.039379 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qg67"] Mar 14 09:32:38 crc kubenswrapper[4687]: I0314 09:32:38.049227 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qg67"] Mar 14 09:32:39 crc kubenswrapper[4687]: I0314 09:32:39.750039 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c77e04c-9666-4578-bc3f-8d91d72ae5d0" path="/var/lib/kubelet/pods/3c77e04c-9666-4578-bc3f-8d91d72ae5d0/volumes" Mar 14 09:32:45 crc kubenswrapper[4687]: I0314 09:32:45.745091 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:32:45 crc kubenswrapper[4687]: I0314 09:32:45.745808 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:32:45 crc kubenswrapper[4687]: E0314 09:32:45.746173 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:32:45 crc kubenswrapper[4687]: E0314 09:32:45.746183 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:32:57 crc kubenswrapper[4687]: I0314 09:32:57.737848 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:32:57 
crc kubenswrapper[4687]: E0314 09:32:57.738766 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:33:00 crc kubenswrapper[4687]: I0314 09:33:00.737748 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:33:00 crc kubenswrapper[4687]: E0314 09:33:00.738180 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:33:10 crc kubenswrapper[4687]: I0314 09:33:10.737043 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:33:10 crc kubenswrapper[4687]: E0314 09:33:10.738559 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:33:12 crc kubenswrapper[4687]: I0314 09:33:12.737377 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:33:12 crc kubenswrapper[4687]: E0314 09:33:12.737977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:33:21 crc kubenswrapper[4687]: I0314 09:33:21.737726 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:33:21 crc kubenswrapper[4687]: E0314 09:33:21.738415 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:33:27 crc kubenswrapper[4687]: I0314 09:33:27.737252 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:33:27 crc kubenswrapper[4687]: E0314 09:33:27.738269 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:33:28 crc kubenswrapper[4687]: I0314 09:33:28.959082 4687 scope.go:117] "RemoveContainer" containerID="e3f5d9cef3f8e241906763fc010a75bf2b6ff84393fc24c9878ac8e3ec288a7c" Mar 14 09:33:34 crc kubenswrapper[4687]: I0314 09:33:34.737973 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:33:34 crc kubenswrapper[4687]: E0314 09:33:34.738831 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:33:42 crc kubenswrapper[4687]: I0314 09:33:42.737794 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:33:42 crc kubenswrapper[4687]: E0314 09:33:42.738638 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:33:48 crc kubenswrapper[4687]: I0314 09:33:48.737476 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:33:48 crc kubenswrapper[4687]: E0314 09:33:48.738529 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:33:54 crc kubenswrapper[4687]: I0314 09:33:54.737005 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:33:54 crc kubenswrapper[4687]: E0314 09:33:54.737728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.148783 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558014-wbh8h"] Mar 14 09:34:00 crc kubenswrapper[4687]: E0314 09:34:00.149959 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" containerName="oc" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.149974 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" containerName="oc" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.150165 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" containerName="oc" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.150934 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.154287 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.154411 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.154662 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.158207 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-wbh8h"] Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.265918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtgl\" (UniqueName: \"kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl\") pod \"auto-csr-approver-29558014-wbh8h\" (UID: \"69eb7aae-13a1-49a1-8446-646a2560a5cd\") " pod="openshift-infra/auto-csr-approver-29558014-wbh8h" 
Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.368006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgtgl\" (UniqueName: \"kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl\") pod \"auto-csr-approver-29558014-wbh8h\" (UID: \"69eb7aae-13a1-49a1-8446-646a2560a5cd\") " pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.384850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtgl\" (UniqueName: \"kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl\") pod \"auto-csr-approver-29558014-wbh8h\" (UID: \"69eb7aae-13a1-49a1-8446-646a2560a5cd\") " pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.472625 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.939794 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-wbh8h"] Mar 14 09:34:00 crc kubenswrapper[4687]: I0314 09:34:00.953625 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:34:01 crc kubenswrapper[4687]: I0314 09:34:01.235409 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" event={"ID":"69eb7aae-13a1-49a1-8446-646a2560a5cd","Type":"ContainerStarted","Data":"038d521017f53ace8935a753a4aef148c5b75405ad99f876c8738e8f2404dad4"} Mar 14 09:34:01 crc kubenswrapper[4687]: I0314 09:34:01.737602 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:01 crc kubenswrapper[4687]: E0314 09:34:01.737858 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:02 crc kubenswrapper[4687]: I0314 09:34:02.245010 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" event={"ID":"69eb7aae-13a1-49a1-8446-646a2560a5cd","Type":"ContainerStarted","Data":"8903d33588cc7b3c1f8fe62538fcd0bb9db8ee383122a8887a843f45a760cc3a"} Mar 14 09:34:02 crc kubenswrapper[4687]: I0314 09:34:02.269948 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" podStartSLOduration=1.337444967 podStartE2EDuration="2.269930585s" podCreationTimestamp="2026-03-14 09:34:00 +0000 UTC" firstStartedPulling="2026-03-14 09:34:00.9533635 +0000 UTC m=+2225.941603875" lastFinishedPulling="2026-03-14 09:34:01.885849118 +0000 UTC m=+2226.874089493" observedRunningTime="2026-03-14 09:34:02.262167303 +0000 UTC m=+2227.250407678" watchObservedRunningTime="2026-03-14 09:34:02.269930585 +0000 UTC m=+2227.258170960" Mar 14 09:34:03 crc kubenswrapper[4687]: I0314 09:34:03.259293 4687 generic.go:334] "Generic (PLEG): container finished" podID="69eb7aae-13a1-49a1-8446-646a2560a5cd" containerID="8903d33588cc7b3c1f8fe62538fcd0bb9db8ee383122a8887a843f45a760cc3a" exitCode=0 Mar 14 09:34:03 crc kubenswrapper[4687]: I0314 09:34:03.259369 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" event={"ID":"69eb7aae-13a1-49a1-8446-646a2560a5cd","Type":"ContainerDied","Data":"8903d33588cc7b3c1f8fe62538fcd0bb9db8ee383122a8887a843f45a760cc3a"} Mar 14 09:34:04 crc kubenswrapper[4687]: I0314 09:34:04.584664 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:04 crc kubenswrapper[4687]: I0314 09:34:04.759662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgtgl\" (UniqueName: \"kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl\") pod \"69eb7aae-13a1-49a1-8446-646a2560a5cd\" (UID: \"69eb7aae-13a1-49a1-8446-646a2560a5cd\") " Mar 14 09:34:04 crc kubenswrapper[4687]: I0314 09:34:04.768318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl" (OuterVolumeSpecName: "kube-api-access-tgtgl") pod "69eb7aae-13a1-49a1-8446-646a2560a5cd" (UID: "69eb7aae-13a1-49a1-8446-646a2560a5cd"). InnerVolumeSpecName "kube-api-access-tgtgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:34:04 crc kubenswrapper[4687]: I0314 09:34:04.862966 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgtgl\" (UniqueName: \"kubernetes.io/projected/69eb7aae-13a1-49a1-8446-646a2560a5cd-kube-api-access-tgtgl\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.279080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" event={"ID":"69eb7aae-13a1-49a1-8446-646a2560a5cd","Type":"ContainerDied","Data":"038d521017f53ace8935a753a4aef148c5b75405ad99f876c8738e8f2404dad4"} Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.279617 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038d521017f53ace8935a753a4aef148c5b75405ad99f876c8738e8f2404dad4" Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.279509 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-wbh8h" Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.353966 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-w9s8f"] Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.361254 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-w9s8f"] Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.745605 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:34:05 crc kubenswrapper[4687]: E0314 09:34:05.745985 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:34:05 crc kubenswrapper[4687]: I0314 09:34:05.752469 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792e718d-3161-4cbc-989b-5abf93fd869e" path="/var/lib/kubelet/pods/792e718d-3161-4cbc-989b-5abf93fd869e/volumes" Mar 14 09:34:12 crc kubenswrapper[4687]: I0314 09:34:12.736543 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:12 crc kubenswrapper[4687]: E0314 09:34:12.738446 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:20 crc kubenswrapper[4687]: I0314 09:34:20.736669 4687 scope.go:117] "RemoveContainer" 
containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:34:20 crc kubenswrapper[4687]: E0314 09:34:20.737411 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:34:24 crc kubenswrapper[4687]: I0314 09:34:24.111167 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:34:24 crc kubenswrapper[4687]: I0314 09:34:24.111703 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:34:25 crc kubenswrapper[4687]: I0314 09:34:25.745278 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:25 crc kubenswrapper[4687]: E0314 09:34:25.745953 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:29 crc kubenswrapper[4687]: I0314 09:34:29.051274 4687 scope.go:117] "RemoveContainer" 
containerID="497e33fa5ba63e399c48d733ec0be8f8b3aa62f73ea89fdebbd994cccc2f58f2" Mar 14 09:34:35 crc kubenswrapper[4687]: I0314 09:34:35.748879 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:34:35 crc kubenswrapper[4687]: E0314 09:34:35.749787 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:34:36 crc kubenswrapper[4687]: I0314 09:34:36.737593 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:36 crc kubenswrapper[4687]: E0314 09:34:36.738059 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:48 crc kubenswrapper[4687]: I0314 09:34:48.737354 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:48 crc kubenswrapper[4687]: E0314 09:34:48.738248 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:49 crc kubenswrapper[4687]: I0314 09:34:49.737044 4687 scope.go:117] "RemoveContainer" 
containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:34:49 crc kubenswrapper[4687]: E0314 09:34:49.737261 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:34:54 crc kubenswrapper[4687]: I0314 09:34:54.111313 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:34:54 crc kubenswrapper[4687]: I0314 09:34:54.111856 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.669611 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:34:58 crc kubenswrapper[4687]: E0314 09:34:58.670508 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eb7aae-13a1-49a1-8446-646a2560a5cd" containerName="oc" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.670521 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb7aae-13a1-49a1-8446-646a2560a5cd" containerName="oc" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.670744 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="69eb7aae-13a1-49a1-8446-646a2560a5cd" containerName="oc" Mar 14 09:34:58 crc 
kubenswrapper[4687]: I0314 09:34:58.672114 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.690002 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.723154 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt65t\" (UniqueName: \"kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.723866 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.724123 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.836487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 
crc kubenswrapper[4687]: I0314 09:34:58.836581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.836795 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt65t\" (UniqueName: \"kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.837275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.837697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:58 crc kubenswrapper[4687]: I0314 09:34:58.861445 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt65t\" (UniqueName: \"kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t\") pod \"redhat-marketplace-b9c5m\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.014168 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.479016 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:34:59 crc kubenswrapper[4687]: W0314 09:34:59.483683 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod620cd0ae_e99d_40ea_a445_ee8a1bf01e62.slice/crio-89e4944c343aceb13d43c46aa702abe5fa34faf729af2078bc31c84c73816bab WatchSource:0}: Error finding container 89e4944c343aceb13d43c46aa702abe5fa34faf729af2078bc31c84c73816bab: Status 404 returned error can't find the container with id 89e4944c343aceb13d43c46aa702abe5fa34faf729af2078bc31c84c73816bab Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.737713 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:34:59 crc kubenswrapper[4687]: E0314 09:34:59.738265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.802572 4687 generic.go:334] "Generic (PLEG): container finished" podID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerID="6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108" exitCode=0 Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.802626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" 
event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerDied","Data":"6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108"} Mar 14 09:34:59 crc kubenswrapper[4687]: I0314 09:34:59.802656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerStarted","Data":"89e4944c343aceb13d43c46aa702abe5fa34faf729af2078bc31c84c73816bab"} Mar 14 09:35:01 crc kubenswrapper[4687]: I0314 09:35:01.736946 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:35:01 crc kubenswrapper[4687]: E0314 09:35:01.737481 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:35:01 crc kubenswrapper[4687]: I0314 09:35:01.852297 4687 generic.go:334] "Generic (PLEG): container finished" podID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerID="0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c" exitCode=0 Mar 14 09:35:01 crc kubenswrapper[4687]: I0314 09:35:01.852366 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerDied","Data":"0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c"} Mar 14 09:35:02 crc kubenswrapper[4687]: I0314 09:35:02.861927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerStarted","Data":"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08"} Mar 14 09:35:02 crc kubenswrapper[4687]: 
I0314 09:35:02.887230 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9c5m" podStartSLOduration=2.466125642 podStartE2EDuration="4.887211458s" podCreationTimestamp="2026-03-14 09:34:58 +0000 UTC" firstStartedPulling="2026-03-14 09:34:59.804510738 +0000 UTC m=+2284.792751113" lastFinishedPulling="2026-03-14 09:35:02.225596554 +0000 UTC m=+2287.213836929" observedRunningTime="2026-03-14 09:35:02.879005015 +0000 UTC m=+2287.867245390" watchObservedRunningTime="2026-03-14 09:35:02.887211458 +0000 UTC m=+2287.875451823" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.596258 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.599977 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.613779 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.673640 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.673686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 
09:35:05.673707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692nc\" (UniqueName: \"kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.776299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.776382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.776402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692nc\" (UniqueName: \"kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.776693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc 
kubenswrapper[4687]: I0314 09:35:05.776743 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.797361 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692nc\" (UniqueName: \"kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc\") pod \"community-operators-p2xcd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:05 crc kubenswrapper[4687]: I0314 09:35:05.933903 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:06 crc kubenswrapper[4687]: I0314 09:35:06.447141 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:06 crc kubenswrapper[4687]: W0314 09:35:06.453195 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17db1b7a_67d7_4d03_9781_a229b90052cd.slice/crio-4ecc77a1de5e7a13dedc4c27e2ef380ccfe003fe0d3da8f51e1e2149234a2124 WatchSource:0}: Error finding container 4ecc77a1de5e7a13dedc4c27e2ef380ccfe003fe0d3da8f51e1e2149234a2124: Status 404 returned error can't find the container with id 4ecc77a1de5e7a13dedc4c27e2ef380ccfe003fe0d3da8f51e1e2149234a2124 Mar 14 09:35:06 crc kubenswrapper[4687]: I0314 09:35:06.899211 4687 generic.go:334] "Generic (PLEG): container finished" podID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerID="a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef" exitCode=0 Mar 14 09:35:06 crc kubenswrapper[4687]: I0314 09:35:06.899323 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerDied","Data":"a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef"} Mar 14 09:35:06 crc kubenswrapper[4687]: I0314 09:35:06.899720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerStarted","Data":"4ecc77a1de5e7a13dedc4c27e2ef380ccfe003fe0d3da8f51e1e2149234a2124"} Mar 14 09:35:07 crc kubenswrapper[4687]: I0314 09:35:07.922774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerStarted","Data":"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc"} Mar 14 09:35:08 crc kubenswrapper[4687]: I0314 09:35:08.936159 4687 generic.go:334] "Generic (PLEG): container finished" podID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerID="a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc" exitCode=0 Mar 14 09:35:08 crc kubenswrapper[4687]: I0314 09:35:08.936230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerDied","Data":"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc"} Mar 14 09:35:09 crc kubenswrapper[4687]: I0314 09:35:09.015158 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:09 crc kubenswrapper[4687]: I0314 09:35:09.015230 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:09 crc kubenswrapper[4687]: I0314 09:35:09.065323 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:09 crc kubenswrapper[4687]: I0314 09:35:09.947362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerStarted","Data":"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1"} Mar 14 09:35:09 crc kubenswrapper[4687]: I0314 09:35:09.976714 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p2xcd" podStartSLOduration=2.553471945 podStartE2EDuration="4.976692255s" podCreationTimestamp="2026-03-14 09:35:05 +0000 UTC" firstStartedPulling="2026-03-14 09:35:06.901076829 +0000 UTC m=+2291.889317204" lastFinishedPulling="2026-03-14 09:35:09.324297139 +0000 UTC m=+2294.312537514" observedRunningTime="2026-03-14 09:35:09.969888957 +0000 UTC m=+2294.958129342" watchObservedRunningTime="2026-03-14 09:35:09.976692255 +0000 UTC m=+2294.964932630" Mar 14 09:35:10 crc kubenswrapper[4687]: I0314 09:35:10.012717 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:11 crc kubenswrapper[4687]: I0314 09:35:11.365993 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:35:11 crc kubenswrapper[4687]: I0314 09:35:11.737987 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:35:11 crc kubenswrapper[4687]: E0314 09:35:11.738257 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:35:11 crc 
kubenswrapper[4687]: I0314 09:35:11.963270 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9c5m" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="registry-server" containerID="cri-o://a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08" gracePeriod=2 Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.438737 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.623440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content\") pod \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.623544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities\") pod \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.623602 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt65t\" (UniqueName: \"kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t\") pod \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\" (UID: \"620cd0ae-e99d-40ea-a445-ee8a1bf01e62\") " Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.624862 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities" (OuterVolumeSpecName: "utilities") pod "620cd0ae-e99d-40ea-a445-ee8a1bf01e62" (UID: "620cd0ae-e99d-40ea-a445-ee8a1bf01e62"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.630511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t" (OuterVolumeSpecName: "kube-api-access-qt65t") pod "620cd0ae-e99d-40ea-a445-ee8a1bf01e62" (UID: "620cd0ae-e99d-40ea-a445-ee8a1bf01e62"). InnerVolumeSpecName "kube-api-access-qt65t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.650411 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "620cd0ae-e99d-40ea-a445-ee8a1bf01e62" (UID: "620cd0ae-e99d-40ea-a445-ee8a1bf01e62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.725638 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.725672 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.725683 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt65t\" (UniqueName: \"kubernetes.io/projected/620cd0ae-e99d-40ea-a445-ee8a1bf01e62-kube-api-access-qt65t\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.977466 4687 generic.go:334] "Generic (PLEG): container finished" podID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" 
containerID="a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08" exitCode=0 Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.977509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerDied","Data":"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08"} Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.977531 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c5m" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.977549 4687 scope.go:117] "RemoveContainer" containerID="a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08" Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.977535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c5m" event={"ID":"620cd0ae-e99d-40ea-a445-ee8a1bf01e62","Type":"ContainerDied","Data":"89e4944c343aceb13d43c46aa702abe5fa34faf729af2078bc31c84c73816bab"} Mar 14 09:35:12 crc kubenswrapper[4687]: I0314 09:35:12.998613 4687 scope.go:117] "RemoveContainer" containerID="0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.011867 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.020916 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c5m"] Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.040730 4687 scope.go:117] "RemoveContainer" containerID="6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.070025 4687 scope.go:117] "RemoveContainer" containerID="a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08" Mar 14 
09:35:13 crc kubenswrapper[4687]: E0314 09:35:13.070586 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08\": container with ID starting with a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08 not found: ID does not exist" containerID="a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.070624 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08"} err="failed to get container status \"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08\": rpc error: code = NotFound desc = could not find container \"a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08\": container with ID starting with a433bfd1dcaae5268a22b75d054c867ace1d786cb9f79e902a7e2ba8b8d0dc08 not found: ID does not exist" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.070646 4687 scope.go:117] "RemoveContainer" containerID="0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c" Mar 14 09:35:13 crc kubenswrapper[4687]: E0314 09:35:13.071067 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c\": container with ID starting with 0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c not found: ID does not exist" containerID="0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.071097 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c"} err="failed to get container status 
\"0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c\": rpc error: code = NotFound desc = could not find container \"0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c\": container with ID starting with 0421602c558768a14ca1e3f4d286295fd05d9332a8927f31d81945cbec15746c not found: ID does not exist" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.071119 4687 scope.go:117] "RemoveContainer" containerID="6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108" Mar 14 09:35:13 crc kubenswrapper[4687]: E0314 09:35:13.071528 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108\": container with ID starting with 6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108 not found: ID does not exist" containerID="6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.071560 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108"} err="failed to get container status \"6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108\": rpc error: code = NotFound desc = could not find container \"6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108\": container with ID starting with 6d4cad55f92991092866994ab7d37cdf91dd8a75e766542fea53b367e1997108 not found: ID does not exist" Mar 14 09:35:13 crc kubenswrapper[4687]: I0314 09:35:13.750564 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" path="/var/lib/kubelet/pods/620cd0ae-e99d-40ea-a445-ee8a1bf01e62/volumes" Mar 14 09:35:15 crc kubenswrapper[4687]: I0314 09:35:15.934299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p2xcd" 
Mar 14 09:35:15 crc kubenswrapper[4687]: I0314 09:35:15.934888 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:15 crc kubenswrapper[4687]: I0314 09:35:15.981023 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:16 crc kubenswrapper[4687]: I0314 09:35:16.057929 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:16 crc kubenswrapper[4687]: I0314 09:35:16.736803 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:35:16 crc kubenswrapper[4687]: E0314 09:35:16.737095 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:35:17 crc kubenswrapper[4687]: I0314 09:35:17.178062 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.025649 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p2xcd" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="registry-server" containerID="cri-o://45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1" gracePeriod=2 Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.504896 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.649046 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content\") pod \"17db1b7a-67d7-4d03-9781-a229b90052cd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.649304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-692nc\" (UniqueName: \"kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc\") pod \"17db1b7a-67d7-4d03-9781-a229b90052cd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.649526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities\") pod \"17db1b7a-67d7-4d03-9781-a229b90052cd\" (UID: \"17db1b7a-67d7-4d03-9781-a229b90052cd\") " Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.650729 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities" (OuterVolumeSpecName: "utilities") pod "17db1b7a-67d7-4d03-9781-a229b90052cd" (UID: "17db1b7a-67d7-4d03-9781-a229b90052cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.655364 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc" (OuterVolumeSpecName: "kube-api-access-692nc") pod "17db1b7a-67d7-4d03-9781-a229b90052cd" (UID: "17db1b7a-67d7-4d03-9781-a229b90052cd"). InnerVolumeSpecName "kube-api-access-692nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.700389 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17db1b7a-67d7-4d03-9781-a229b90052cd" (UID: "17db1b7a-67d7-4d03-9781-a229b90052cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.752219 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-692nc\" (UniqueName: \"kubernetes.io/projected/17db1b7a-67d7-4d03-9781-a229b90052cd-kube-api-access-692nc\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.752266 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:18 crc kubenswrapper[4687]: I0314 09:35:18.752279 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17db1b7a-67d7-4d03-9781-a229b90052cd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.035286 4687 generic.go:334] "Generic (PLEG): container finished" podID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerID="45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1" exitCode=0 Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.035344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerDied","Data":"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1"} Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.035376 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-p2xcd" event={"ID":"17db1b7a-67d7-4d03-9781-a229b90052cd","Type":"ContainerDied","Data":"4ecc77a1de5e7a13dedc4c27e2ef380ccfe003fe0d3da8f51e1e2149234a2124"} Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.035382 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p2xcd" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.035395 4687 scope.go:117] "RemoveContainer" containerID="45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.062502 4687 scope.go:117] "RemoveContainer" containerID="a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.068377 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.076865 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p2xcd"] Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.088064 4687 scope.go:117] "RemoveContainer" containerID="a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.138553 4687 scope.go:117] "RemoveContainer" containerID="45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1" Mar 14 09:35:19 crc kubenswrapper[4687]: E0314 09:35:19.139105 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1\": container with ID starting with 45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1 not found: ID does not exist" containerID="45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 
09:35:19.139142 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1"} err="failed to get container status \"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1\": rpc error: code = NotFound desc = could not find container \"45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1\": container with ID starting with 45ac4a53443789f7aa50a2dc87bbd898707ae338b88e74d47559a00893c137f1 not found: ID does not exist" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.139167 4687 scope.go:117] "RemoveContainer" containerID="a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc" Mar 14 09:35:19 crc kubenswrapper[4687]: E0314 09:35:19.139424 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc\": container with ID starting with a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc not found: ID does not exist" containerID="a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.139456 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc"} err="failed to get container status \"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc\": rpc error: code = NotFound desc = could not find container \"a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc\": container with ID starting with a6810a48d5e6c642920ed7f879580a83fc7393e23a7d540704492019512888fc not found: ID does not exist" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.139475 4687 scope.go:117] "RemoveContainer" containerID="a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef" Mar 14 09:35:19 crc 
kubenswrapper[4687]: E0314 09:35:19.139819 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef\": container with ID starting with a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef not found: ID does not exist" containerID="a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.139869 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef"} err="failed to get container status \"a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef\": rpc error: code = NotFound desc = could not find container \"a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef\": container with ID starting with a416018849a73915806755fc6bd02ae18e30b9aa71c54a897c60ef195ef13fef not found: ID does not exist" Mar 14 09:35:19 crc kubenswrapper[4687]: I0314 09:35:19.750202 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" path="/var/lib/kubelet/pods/17db1b7a-67d7-4d03-9781-a229b90052cd/volumes" Mar 14 09:35:23 crc kubenswrapper[4687]: I0314 09:35:23.737441 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:35:23 crc kubenswrapper[4687]: E0314 09:35:23.737919 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:35:24 crc kubenswrapper[4687]: I0314 09:35:24.110981 4687 patch_prober.go:28] interesting 
pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:35:24 crc kubenswrapper[4687]: I0314 09:35:24.111040 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:35:24 crc kubenswrapper[4687]: I0314 09:35:24.111087 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:35:24 crc kubenswrapper[4687]: I0314 09:35:24.112129 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:35:24 crc kubenswrapper[4687]: I0314 09:35:24.112186 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" gracePeriod=600 Mar 14 09:35:24 crc kubenswrapper[4687]: E0314 09:35:24.245054 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:35:25 crc kubenswrapper[4687]: I0314 09:35:25.102277 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" exitCode=0 Mar 14 09:35:25 crc kubenswrapper[4687]: I0314 09:35:25.102325 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267"} Mar 14 09:35:25 crc kubenswrapper[4687]: I0314 09:35:25.102383 4687 scope.go:117] "RemoveContainer" containerID="974090fefb4f85daaf51bea1a1f723869d76e0c7e5b925bd7904be6bc7e500b3" Mar 14 09:35:25 crc kubenswrapper[4687]: I0314 09:35:25.103079 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:35:25 crc kubenswrapper[4687]: E0314 09:35:25.103512 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:35:28 crc kubenswrapper[4687]: I0314 09:35:28.737495 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:35:28 crc kubenswrapper[4687]: E0314 09:35:28.737970 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:35:34 crc kubenswrapper[4687]: I0314 09:35:34.737292 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:35:34 crc kubenswrapper[4687]: E0314 09:35:34.737986 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:35:38 crc kubenswrapper[4687]: I0314 09:35:38.737250 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:35:38 crc kubenswrapper[4687]: E0314 09:35:38.738022 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:35:39 crc kubenswrapper[4687]: I0314 09:35:39.738028 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:35:39 crc kubenswrapper[4687]: E0314 09:35:39.738280 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:35:46 crc kubenswrapper[4687]: I0314 09:35:46.736801 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:35:46 crc kubenswrapper[4687]: E0314 09:35:46.737560 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:35:51 crc kubenswrapper[4687]: I0314 09:35:51.737527 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:35:51 crc kubenswrapper[4687]: E0314 09:35:51.738334 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:35:53 crc kubenswrapper[4687]: I0314 09:35:53.737994 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:35:53 crc kubenswrapper[4687]: E0314 09:35:53.738498 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:35:59 crc kubenswrapper[4687]: I0314 09:35:59.737130 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:35:59 crc kubenswrapper[4687]: E0314 09:35:59.737856 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145432 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558016-hbxxp"] Mar 14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145793 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145810 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145819 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145827 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145864 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145870 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="registry-server" Mar 
14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145880 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145886 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145896 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145901 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4687]: E0314 09:36:00.145911 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.145917 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.146083 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="17db1b7a-67d7-4d03-9781-a229b90052cd" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.146113 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="620cd0ae-e99d-40ea-a445-ee8a1bf01e62" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.146762 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.148695 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.148859 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.157167 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-hbxxp"] Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.157444 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.279931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snnv4\" (UniqueName: \"kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4\") pod \"auto-csr-approver-29558016-hbxxp\" (UID: \"2cb89acd-29c7-484a-915a-5d8df021e544\") " pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.382887 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snnv4\" (UniqueName: \"kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4\") pod \"auto-csr-approver-29558016-hbxxp\" (UID: \"2cb89acd-29c7-484a-915a-5d8df021e544\") " pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.401445 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snnv4\" (UniqueName: \"kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4\") pod \"auto-csr-approver-29558016-hbxxp\" (UID: \"2cb89acd-29c7-484a-915a-5d8df021e544\") " 
pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.478651 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:00 crc kubenswrapper[4687]: I0314 09:36:00.945656 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-hbxxp"] Mar 14 09:36:01 crc kubenswrapper[4687]: I0314 09:36:01.445911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" event={"ID":"2cb89acd-29c7-484a-915a-5d8df021e544","Type":"ContainerStarted","Data":"dcd1e87a6a383e264412b548d4efa772cda6a8b51eb1995d21697dffb1e3dd12"} Mar 14 09:36:02 crc kubenswrapper[4687]: I0314 09:36:02.457138 4687 generic.go:334] "Generic (PLEG): container finished" podID="2cb89acd-29c7-484a-915a-5d8df021e544" containerID="1b4d623566169f09e68dfc4ab0fc384e0e786b6efe1d8504d0ce5ac1d12d25ec" exitCode=0 Mar 14 09:36:02 crc kubenswrapper[4687]: I0314 09:36:02.457255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" event={"ID":"2cb89acd-29c7-484a-915a-5d8df021e544","Type":"ContainerDied","Data":"1b4d623566169f09e68dfc4ab0fc384e0e786b6efe1d8504d0ce5ac1d12d25ec"} Mar 14 09:36:03 crc kubenswrapper[4687]: I0314 09:36:03.737031 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:36:03 crc kubenswrapper[4687]: E0314 09:36:03.737708 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" 
Mar 14 09:36:03 crc kubenswrapper[4687]: I0314 09:36:03.850018 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:03 crc kubenswrapper[4687]: I0314 09:36:03.958119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snnv4\" (UniqueName: \"kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4\") pod \"2cb89acd-29c7-484a-915a-5d8df021e544\" (UID: \"2cb89acd-29c7-484a-915a-5d8df021e544\") " Mar 14 09:36:03 crc kubenswrapper[4687]: I0314 09:36:03.966556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4" (OuterVolumeSpecName: "kube-api-access-snnv4") pod "2cb89acd-29c7-484a-915a-5d8df021e544" (UID: "2cb89acd-29c7-484a-915a-5d8df021e544"). InnerVolumeSpecName "kube-api-access-snnv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.061158 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snnv4\" (UniqueName: \"kubernetes.io/projected/2cb89acd-29c7-484a-915a-5d8df021e544-kube-api-access-snnv4\") on node \"crc\" DevicePath \"\"" Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.478608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" event={"ID":"2cb89acd-29c7-484a-915a-5d8df021e544","Type":"ContainerDied","Data":"dcd1e87a6a383e264412b548d4efa772cda6a8b51eb1995d21697dffb1e3dd12"} Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.478671 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd1e87a6a383e264412b548d4efa772cda6a8b51eb1995d21697dffb1e3dd12" Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.478696 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-hbxxp" Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.941460 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-9t6vp"] Mar 14 09:36:04 crc kubenswrapper[4687]: I0314 09:36:04.952182 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-9t6vp"] Mar 14 09:36:05 crc kubenswrapper[4687]: I0314 09:36:05.747757 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a60f94-49bb-4245-8704-a041c5f38c2e" path="/var/lib/kubelet/pods/c3a60f94-49bb-4245-8704-a041c5f38c2e/volumes" Mar 14 09:36:06 crc kubenswrapper[4687]: I0314 09:36:06.736405 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:36:06 crc kubenswrapper[4687]: E0314 09:36:06.736720 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:36:10 crc kubenswrapper[4687]: I0314 09:36:10.737162 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:36:10 crc kubenswrapper[4687]: E0314 09:36:10.737908 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:18 crc kubenswrapper[4687]: I0314 09:36:18.736528 4687 scope.go:117] "RemoveContainer" 
containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:36:18 crc kubenswrapper[4687]: E0314 09:36:18.737108 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:36:20 crc kubenswrapper[4687]: I0314 09:36:20.737236 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:36:20 crc kubenswrapper[4687]: E0314 09:36:20.737757 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:36:21 crc kubenswrapper[4687]: I0314 09:36:21.737041 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:36:21 crc kubenswrapper[4687]: E0314 09:36:21.739285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:29 crc kubenswrapper[4687]: I0314 09:36:29.189477 4687 scope.go:117] "RemoveContainer" containerID="13935bcc5af7b83625fcf8c37bbe3c3c3658c05e020d63c5187832ed3c2529bc" Mar 14 09:36:32 crc kubenswrapper[4687]: I0314 09:36:32.737278 4687 
scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:36:32 crc kubenswrapper[4687]: E0314 09:36:32.737979 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:36:34 crc kubenswrapper[4687]: I0314 09:36:34.737118 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:36:34 crc kubenswrapper[4687]: E0314 09:36:34.737601 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:36:34 crc kubenswrapper[4687]: I0314 09:36:34.738380 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:36:35 crc kubenswrapper[4687]: I0314 09:36:35.756725 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf"} Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.219718 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.220468 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.825572 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" exitCode=1 Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.825625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf"} Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.825661 4687 scope.go:117] "RemoveContainer" containerID="48230b575bf2f373b6b9d55ee16c859dc03d8bd3111ddd68fcec167fa848ad08" Mar 14 09:36:42 crc kubenswrapper[4687]: I0314 09:36:42.826482 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:36:42 crc kubenswrapper[4687]: E0314 09:36:42.826745 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:43 crc kubenswrapper[4687]: I0314 09:36:43.837699 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:36:43 crc kubenswrapper[4687]: E0314 09:36:43.839988 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:46 crc kubenswrapper[4687]: 
I0314 09:36:46.738520 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:36:46 crc kubenswrapper[4687]: E0314 09:36:46.739149 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:36:49 crc kubenswrapper[4687]: I0314 09:36:49.736777 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:36:50 crc kubenswrapper[4687]: I0314 09:36:50.899718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a"} Mar 14 09:36:52 crc kubenswrapper[4687]: I0314 09:36:52.128144 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:36:52 crc kubenswrapper[4687]: I0314 09:36:52.128511 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:36:52 crc kubenswrapper[4687]: I0314 09:36:52.219947 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:36:52 crc kubenswrapper[4687]: I0314 09:36:52.220019 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:36:52 crc kubenswrapper[4687]: I0314 09:36:52.220928 4687 scope.go:117] "RemoveContainer" 
containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:36:52 crc kubenswrapper[4687]: E0314 09:36:52.221189 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:36:57 crc kubenswrapper[4687]: I0314 09:36:57.954352 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" exitCode=1 Mar 14 09:36:57 crc kubenswrapper[4687]: I0314 09:36:57.954386 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a"} Mar 14 09:36:57 crc kubenswrapper[4687]: I0314 09:36:57.954927 4687 scope.go:117] "RemoveContainer" containerID="892d3057e332f3215a1762ab104e647926db7abd6dd6a10905267612d2e60086" Mar 14 09:36:57 crc kubenswrapper[4687]: I0314 09:36:57.963034 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:36:57 crc kubenswrapper[4687]: E0314 09:36:57.963500 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:01 crc kubenswrapper[4687]: I0314 09:37:01.737581 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" 
Mar 14 09:37:01 crc kubenswrapper[4687]: E0314 09:37:01.738279 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:37:02 crc kubenswrapper[4687]: I0314 09:37:02.128172 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:37:02 crc kubenswrapper[4687]: I0314 09:37:02.128261 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:37:02 crc kubenswrapper[4687]: I0314 09:37:02.129148 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:37:02 crc kubenswrapper[4687]: E0314 09:37:02.129472 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:05 crc kubenswrapper[4687]: I0314 09:37:05.746492 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:37:05 crc kubenswrapper[4687]: E0314 09:37:05.747084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:37:14 crc kubenswrapper[4687]: I0314 09:37:14.737312 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:37:14 crc kubenswrapper[4687]: E0314 09:37:14.738090 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:37:16 crc kubenswrapper[4687]: I0314 09:37:16.736949 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:37:16 crc kubenswrapper[4687]: E0314 09:37:16.737178 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:37:16 crc kubenswrapper[4687]: I0314 09:37:16.737238 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:37:16 crc kubenswrapper[4687]: E0314 09:37:16.737521 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:27 crc kubenswrapper[4687]: I0314 09:37:27.737659 4687 scope.go:117] "RemoveContainer" 
containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:37:27 crc kubenswrapper[4687]: E0314 09:37:27.738325 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:29 crc kubenswrapper[4687]: I0314 09:37:29.737595 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:37:29 crc kubenswrapper[4687]: E0314 09:37:29.738213 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:37:30 crc kubenswrapper[4687]: I0314 09:37:30.737953 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:37:30 crc kubenswrapper[4687]: E0314 09:37:30.738268 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:37:42 crc kubenswrapper[4687]: I0314 09:37:42.737381 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:37:42 crc kubenswrapper[4687]: E0314 09:37:42.738131 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:37:42 crc kubenswrapper[4687]: I0314 09:37:42.739261 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:37:42 crc kubenswrapper[4687]: E0314 09:37:42.739549 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:44 crc kubenswrapper[4687]: I0314 09:37:44.737289 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:37:44 crc kubenswrapper[4687]: E0314 09:37:44.737782 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:37:53 crc kubenswrapper[4687]: I0314 09:37:53.737853 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:37:53 crc kubenswrapper[4687]: I0314 09:37:53.738672 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:37:53 crc kubenswrapper[4687]: 
E0314 09:37:53.739084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:37:53 crc kubenswrapper[4687]: E0314 09:37:53.739115 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:37:57 crc kubenswrapper[4687]: I0314 09:37:57.737935 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:37:57 crc kubenswrapper[4687]: E0314 09:37:57.738828 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.165675 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558018-jnzlz"] Mar 14 09:38:00 crc kubenswrapper[4687]: E0314 09:38:00.168052 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb89acd-29c7-484a-915a-5d8df021e544" containerName="oc" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.168178 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb89acd-29c7-484a-915a-5d8df021e544" containerName="oc" Mar 14 
09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.168522 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb89acd-29c7-484a-915a-5d8df021e544" containerName="oc" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.169812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.173545 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.173815 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.176225 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.187185 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-jnzlz"] Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.216046 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8l29\" (UniqueName: \"kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29\") pod \"auto-csr-approver-29558018-jnzlz\" (UID: \"11ee5faf-bda4-447d-a5f8-ae23e22a75d4\") " pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.317341 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8l29\" (UniqueName: \"kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29\") pod \"auto-csr-approver-29558018-jnzlz\" (UID: \"11ee5faf-bda4-447d-a5f8-ae23e22a75d4\") " pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.337937 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8l29\" (UniqueName: \"kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29\") pod \"auto-csr-approver-29558018-jnzlz\" (UID: \"11ee5faf-bda4-447d-a5f8-ae23e22a75d4\") " pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.489353 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:00 crc kubenswrapper[4687]: I0314 09:38:00.936783 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-jnzlz"] Mar 14 09:38:01 crc kubenswrapper[4687]: I0314 09:38:01.553984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" event={"ID":"11ee5faf-bda4-447d-a5f8-ae23e22a75d4","Type":"ContainerStarted","Data":"bda3c04a085aa55305ce2bc331adde61802207880e63c39b9667c4cc141e32b3"} Mar 14 09:38:02 crc kubenswrapper[4687]: I0314 09:38:02.564799 4687 generic.go:334] "Generic (PLEG): container finished" podID="11ee5faf-bda4-447d-a5f8-ae23e22a75d4" containerID="d80a220eca2f1b256e44efbf0dda2c498c9634b5027dfe8f4d52b47a2a695e54" exitCode=0 Mar 14 09:38:02 crc kubenswrapper[4687]: I0314 09:38:02.564851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" event={"ID":"11ee5faf-bda4-447d-a5f8-ae23e22a75d4","Type":"ContainerDied","Data":"d80a220eca2f1b256e44efbf0dda2c498c9634b5027dfe8f4d52b47a2a695e54"} Mar 14 09:38:03 crc kubenswrapper[4687]: I0314 09:38:03.896451 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.005555 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8l29\" (UniqueName: \"kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29\") pod \"11ee5faf-bda4-447d-a5f8-ae23e22a75d4\" (UID: \"11ee5faf-bda4-447d-a5f8-ae23e22a75d4\") " Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.012617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29" (OuterVolumeSpecName: "kube-api-access-w8l29") pod "11ee5faf-bda4-447d-a5f8-ae23e22a75d4" (UID: "11ee5faf-bda4-447d-a5f8-ae23e22a75d4"). InnerVolumeSpecName "kube-api-access-w8l29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.107465 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8l29\" (UniqueName: \"kubernetes.io/projected/11ee5faf-bda4-447d-a5f8-ae23e22a75d4-kube-api-access-w8l29\") on node \"crc\" DevicePath \"\"" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.581876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" event={"ID":"11ee5faf-bda4-447d-a5f8-ae23e22a75d4","Type":"ContainerDied","Data":"bda3c04a085aa55305ce2bc331adde61802207880e63c39b9667c4cc141e32b3"} Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.582182 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda3c04a085aa55305ce2bc331adde61802207880e63c39b9667c4cc141e32b3" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.582007 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-jnzlz" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.737305 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:38:04 crc kubenswrapper[4687]: E0314 09:38:04.737669 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.965979 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-vdjt9"] Mar 14 09:38:04 crc kubenswrapper[4687]: I0314 09:38:04.978429 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-vdjt9"] Mar 14 09:38:05 crc kubenswrapper[4687]: I0314 09:38:05.748846 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c895e0-3e7e-4b68-976e-aad54eb4bcd1" path="/var/lib/kubelet/pods/e4c895e0-3e7e-4b68-976e-aad54eb4bcd1/volumes" Mar 14 09:38:08 crc kubenswrapper[4687]: I0314 09:38:08.737084 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:38:08 crc kubenswrapper[4687]: E0314 09:38:08.737808 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:38:11 crc kubenswrapper[4687]: I0314 
09:38:11.737165 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:38:11 crc kubenswrapper[4687]: E0314 09:38:11.737668 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:38:17 crc kubenswrapper[4687]: I0314 09:38:17.737226 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:38:17 crc kubenswrapper[4687]: E0314 09:38:17.739242 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:38:20 crc kubenswrapper[4687]: I0314 09:38:20.737690 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:38:20 crc kubenswrapper[4687]: E0314 09:38:20.739551 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:38:25 crc kubenswrapper[4687]: I0314 09:38:25.747269 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:38:25 crc 
kubenswrapper[4687]: E0314 09:38:25.748293 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:38:28 crc kubenswrapper[4687]: I0314 09:38:28.737895 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:38:28 crc kubenswrapper[4687]: E0314 09:38:28.738439 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:38:29 crc kubenswrapper[4687]: I0314 09:38:29.301466 4687 scope.go:117] "RemoveContainer" containerID="febc582d305c277456e25bf52527881a6f23fe81a888e48ff71ccdc4437fee2a" Mar 14 09:38:35 crc kubenswrapper[4687]: I0314 09:38:35.742812 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:38:35 crc kubenswrapper[4687]: E0314 09:38:35.743565 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:38:37 crc kubenswrapper[4687]: I0314 09:38:37.737209 4687 scope.go:117] "RemoveContainer" 
containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:38:37 crc kubenswrapper[4687]: E0314 09:38:37.737807 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:38:40 crc kubenswrapper[4687]: I0314 09:38:40.737458 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:38:40 crc kubenswrapper[4687]: E0314 09:38:40.737961 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:38:47 crc kubenswrapper[4687]: I0314 09:38:47.738145 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:38:47 crc kubenswrapper[4687]: E0314 09:38:47.739681 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:38:48 crc kubenswrapper[4687]: I0314 09:38:48.737470 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:38:48 crc kubenswrapper[4687]: E0314 09:38:48.738020 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:38:54 crc kubenswrapper[4687]: I0314 09:38:54.738297 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:38:54 crc kubenswrapper[4687]: E0314 09:38:54.738982 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:39:00 crc kubenswrapper[4687]: I0314 09:39:00.737707 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:39:00 crc kubenswrapper[4687]: E0314 09:39:00.738604 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:39:03 crc kubenswrapper[4687]: I0314 09:39:03.737399 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:39:03 crc kubenswrapper[4687]: E0314 09:39:03.738186 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:39:09 crc kubenswrapper[4687]: I0314 09:39:09.736839 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:39:09 crc kubenswrapper[4687]: E0314 09:39:09.737545 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:39:14 crc kubenswrapper[4687]: I0314 09:39:14.736772 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:39:14 crc kubenswrapper[4687]: E0314 09:39:14.737439 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:39:17 crc kubenswrapper[4687]: I0314 09:39:17.737166 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:39:17 crc kubenswrapper[4687]: E0314 09:39:17.737785 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:39:22 crc kubenswrapper[4687]: I0314 09:39:22.736991 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:39:22 crc kubenswrapper[4687]: E0314 09:39:22.737749 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:39:26 crc kubenswrapper[4687]: I0314 09:39:26.738236 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:39:26 crc kubenswrapper[4687]: E0314 09:39:26.739378 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:39:30 crc kubenswrapper[4687]: I0314 09:39:30.737175 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:39:30 crc kubenswrapper[4687]: E0314 09:39:30.737712 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:39:37 crc kubenswrapper[4687]: I0314 09:39:37.736569 4687 scope.go:117] "RemoveContainer" 
containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:39:37 crc kubenswrapper[4687]: E0314 09:39:37.737355 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:39:39 crc kubenswrapper[4687]: I0314 09:39:39.737676 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:39:39 crc kubenswrapper[4687]: E0314 09:39:39.738160 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:39:41 crc kubenswrapper[4687]: I0314 09:39:41.737721 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:39:41 crc kubenswrapper[4687]: E0314 09:39:41.738085 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:39:50 crc kubenswrapper[4687]: I0314 09:39:50.738097 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:39:50 crc kubenswrapper[4687]: I0314 09:39:50.738784 4687 
scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:39:50 crc kubenswrapper[4687]: E0314 09:39:50.739068 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:39:50 crc kubenswrapper[4687]: E0314 09:39:50.739263 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:39:53 crc kubenswrapper[4687]: I0314 09:39:53.736501 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:39:53 crc kubenswrapper[4687]: E0314 09:39:53.737939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.137670 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558020-kptwb"] Mar 14 09:40:00 crc kubenswrapper[4687]: E0314 09:40:00.138679 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ee5faf-bda4-447d-a5f8-ae23e22a75d4" containerName="oc" Mar 14 09:40:00 crc 
kubenswrapper[4687]: I0314 09:40:00.138700 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ee5faf-bda4-447d-a5f8-ae23e22a75d4" containerName="oc" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.138899 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ee5faf-bda4-447d-a5f8-ae23e22a75d4" containerName="oc" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.139735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.141575 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.142215 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.143371 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.148159 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-kptwb"] Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.189405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9qb\" (UniqueName: \"kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb\") pod \"auto-csr-approver-29558020-kptwb\" (UID: \"ac329c32-9aee-4197-bc3c-a9ecb12e057c\") " pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.291697 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9qb\" (UniqueName: \"kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb\") pod \"auto-csr-approver-29558020-kptwb\" 
(UID: \"ac329c32-9aee-4197-bc3c-a9ecb12e057c\") " pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.313992 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9qb\" (UniqueName: \"kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb\") pod \"auto-csr-approver-29558020-kptwb\" (UID: \"ac329c32-9aee-4197-bc3c-a9ecb12e057c\") " pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.462865 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.939284 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-kptwb"] Mar 14 09:40:00 crc kubenswrapper[4687]: I0314 09:40:00.944753 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:40:01 crc kubenswrapper[4687]: I0314 09:40:01.696549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-kptwb" event={"ID":"ac329c32-9aee-4197-bc3c-a9ecb12e057c","Type":"ContainerStarted","Data":"e90a7aae1973e5f05c48603fef3d3d0715352408b1820031a6db7bddf66a742f"} Mar 14 09:40:01 crc kubenswrapper[4687]: I0314 09:40:01.737074 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:40:01 crc kubenswrapper[4687]: I0314 09:40:01.737280 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:40:01 crc kubenswrapper[4687]: E0314 09:40:01.737318 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:40:01 crc kubenswrapper[4687]: E0314 09:40:01.737519 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:40:02 crc kubenswrapper[4687]: I0314 09:40:02.706402 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-kptwb" event={"ID":"ac329c32-9aee-4197-bc3c-a9ecb12e057c","Type":"ContainerStarted","Data":"036cef85ef83953c4bc8aca8324062f5568c3d17dc4f6d23ad78452af9ca072f"} Mar 14 09:40:02 crc kubenswrapper[4687]: I0314 09:40:02.723871 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558020-kptwb" podStartSLOduration=1.359485836 podStartE2EDuration="2.723853816s" podCreationTimestamp="2026-03-14 09:40:00 +0000 UTC" firstStartedPulling="2026-03-14 09:40:00.944572701 +0000 UTC m=+2585.932813076" lastFinishedPulling="2026-03-14 09:40:02.308940691 +0000 UTC m=+2587.297181056" observedRunningTime="2026-03-14 09:40:02.716925616 +0000 UTC m=+2587.705165991" watchObservedRunningTime="2026-03-14 09:40:02.723853816 +0000 UTC m=+2587.712094191" Mar 14 09:40:03 crc kubenswrapper[4687]: I0314 09:40:03.717408 4687 generic.go:334] "Generic (PLEG): container finished" podID="ac329c32-9aee-4197-bc3c-a9ecb12e057c" containerID="036cef85ef83953c4bc8aca8324062f5568c3d17dc4f6d23ad78452af9ca072f" exitCode=0 Mar 14 09:40:03 crc kubenswrapper[4687]: I0314 09:40:03.717514 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29558020-kptwb" event={"ID":"ac329c32-9aee-4197-bc3c-a9ecb12e057c","Type":"ContainerDied","Data":"036cef85ef83953c4bc8aca8324062f5568c3d17dc4f6d23ad78452af9ca072f"} Mar 14 09:40:04 crc kubenswrapper[4687]: I0314 09:40:04.737270 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:40:04 crc kubenswrapper[4687]: E0314 09:40:04.737603 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.103978 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.305971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs9qb\" (UniqueName: \"kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb\") pod \"ac329c32-9aee-4197-bc3c-a9ecb12e057c\" (UID: \"ac329c32-9aee-4197-bc3c-a9ecb12e057c\") " Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.311759 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb" (OuterVolumeSpecName: "kube-api-access-fs9qb") pod "ac329c32-9aee-4197-bc3c-a9ecb12e057c" (UID: "ac329c32-9aee-4197-bc3c-a9ecb12e057c"). InnerVolumeSpecName "kube-api-access-fs9qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.408078 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs9qb\" (UniqueName: \"kubernetes.io/projected/ac329c32-9aee-4197-bc3c-a9ecb12e057c-kube-api-access-fs9qb\") on node \"crc\" DevicePath \"\"" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.747044 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-kptwb" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.758992 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-kptwb" event={"ID":"ac329c32-9aee-4197-bc3c-a9ecb12e057c","Type":"ContainerDied","Data":"e90a7aae1973e5f05c48603fef3d3d0715352408b1820031a6db7bddf66a742f"} Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.759047 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90a7aae1973e5f05c48603fef3d3d0715352408b1820031a6db7bddf66a742f" Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.794941 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-wbh8h"] Mar 14 09:40:05 crc kubenswrapper[4687]: I0314 09:40:05.804689 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-wbh8h"] Mar 14 09:40:07 crc kubenswrapper[4687]: I0314 09:40:07.747485 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69eb7aae-13a1-49a1-8446-646a2560a5cd" path="/var/lib/kubelet/pods/69eb7aae-13a1-49a1-8446-646a2560a5cd/volumes" Mar 14 09:40:14 crc kubenswrapper[4687]: I0314 09:40:14.737084 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:40:14 crc kubenswrapper[4687]: E0314 09:40:14.737791 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:40:15 crc kubenswrapper[4687]: I0314 09:40:15.742436 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:40:15 crc kubenswrapper[4687]: E0314 09:40:15.742810 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:40:19 crc kubenswrapper[4687]: I0314 09:40:19.737571 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:40:19 crc kubenswrapper[4687]: E0314 09:40:19.738293 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:25 crc kubenswrapper[4687]: I0314 09:40:25.749254 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267" Mar 14 09:40:26 crc kubenswrapper[4687]: I0314 09:40:26.920280 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5"} Mar 14 09:40:28 crc kubenswrapper[4687]: I0314 09:40:28.736875 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:40:28 crc kubenswrapper[4687]: E0314 09:40:28.737584 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:40:29 crc kubenswrapper[4687]: I0314 09:40:29.391659 4687 scope.go:117] "RemoveContainer" containerID="8903d33588cc7b3c1f8fe62538fcd0bb9db8ee383122a8887a843f45a760cc3a" Mar 14 09:40:33 crc kubenswrapper[4687]: I0314 09:40:33.737062 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:40:33 crc kubenswrapper[4687]: E0314 09:40:33.737777 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:40 crc kubenswrapper[4687]: I0314 09:40:40.736816 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:40:40 crc kubenswrapper[4687]: E0314 09:40:40.737771 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:40:44 crc kubenswrapper[4687]: I0314 09:40:44.737018 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:40:44 crc kubenswrapper[4687]: E0314 09:40:44.737644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.931556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:40:50 crc kubenswrapper[4687]: E0314 09:40:50.932622 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac329c32-9aee-4197-bc3c-a9ecb12e057c" containerName="oc" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.932639 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac329c32-9aee-4197-bc3c-a9ecb12e057c" containerName="oc" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.932925 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac329c32-9aee-4197-bc3c-a9ecb12e057c" containerName="oc" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.934947 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.943147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrhb\" (UniqueName: \"kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.943227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.943341 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:50 crc kubenswrapper[4687]: I0314 09:40:50.951500 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.045913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrhb\" (UniqueName: \"kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.046302 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.046455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.046902 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.046971 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.075033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrhb\" (UniqueName: \"kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb\") pod \"redhat-operators-mbf7f\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.258852 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:40:51 crc kubenswrapper[4687]: I0314 09:40:51.770816 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:40:52 crc kubenswrapper[4687]: I0314 09:40:52.199207 4687 generic.go:334] "Generic (PLEG): container finished" podID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerID="7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c" exitCode=0 Mar 14 09:40:52 crc kubenswrapper[4687]: I0314 09:40:52.199481 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerDied","Data":"7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c"} Mar 14 09:40:52 crc kubenswrapper[4687]: I0314 09:40:52.199529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerStarted","Data":"4a4561fecb91b5f1ef08608bec4fed3cd1f3f47834f4b360cb6752a7cbbf238a"} Mar 14 09:40:53 crc kubenswrapper[4687]: I0314 09:40:53.212882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerStarted","Data":"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e"} Mar 14 09:40:54 crc kubenswrapper[4687]: I0314 09:40:54.736979 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:40:54 crc kubenswrapper[4687]: E0314 09:40:54.737624 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:40:57 crc kubenswrapper[4687]: I0314 09:40:57.736848 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:40:57 crc kubenswrapper[4687]: E0314 09:40:57.737581 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:40:59 crc kubenswrapper[4687]: I0314 09:40:59.267756 4687 generic.go:334] "Generic (PLEG): container finished" podID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerID="f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e" exitCode=0 Mar 14 09:40:59 crc kubenswrapper[4687]: I0314 09:40:59.267828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerDied","Data":"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e"} Mar 14 09:41:00 crc kubenswrapper[4687]: I0314 09:41:00.281525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerStarted","Data":"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe"} Mar 14 09:41:00 crc kubenswrapper[4687]: I0314 09:41:00.307645 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbf7f" podStartSLOduration=2.852720564 podStartE2EDuration="10.307630558s" podCreationTimestamp="2026-03-14 09:40:50 +0000 UTC" firstStartedPulling="2026-03-14 09:40:52.200770158 +0000 UTC m=+2637.189010533" lastFinishedPulling="2026-03-14 
09:40:59.655680152 +0000 UTC m=+2644.643920527" observedRunningTime="2026-03-14 09:41:00.306206693 +0000 UTC m=+2645.294447088" watchObservedRunningTime="2026-03-14 09:41:00.307630558 +0000 UTC m=+2645.295870933" Mar 14 09:41:01 crc kubenswrapper[4687]: I0314 09:41:01.259779 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:01 crc kubenswrapper[4687]: I0314 09:41:01.260135 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:02 crc kubenswrapper[4687]: I0314 09:41:02.307975 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbf7f" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="registry-server" probeResult="failure" output=< Mar 14 09:41:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:41:02 crc kubenswrapper[4687]: > Mar 14 09:41:08 crc kubenswrapper[4687]: I0314 09:41:08.737295 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:41:08 crc kubenswrapper[4687]: I0314 09:41:08.738178 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:41:08 crc kubenswrapper[4687]: E0314 09:41:08.738315 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:41:08 crc kubenswrapper[4687]: E0314 09:41:08.738442 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:41:11 crc kubenswrapper[4687]: I0314 09:41:11.314706 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:11 crc kubenswrapper[4687]: I0314 09:41:11.366318 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:11 crc kubenswrapper[4687]: I0314 09:41:11.550012 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.383214 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbf7f" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="registry-server" containerID="cri-o://33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe" gracePeriod=2 Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.818326 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.991470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhrhb\" (UniqueName: \"kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb\") pod \"67375cba-bd9d-428c-b3be-78ba0b716ed4\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.991823 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities\") pod \"67375cba-bd9d-428c-b3be-78ba0b716ed4\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.991878 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content\") pod \"67375cba-bd9d-428c-b3be-78ba0b716ed4\" (UID: \"67375cba-bd9d-428c-b3be-78ba0b716ed4\") " Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.992719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities" (OuterVolumeSpecName: "utilities") pod "67375cba-bd9d-428c-b3be-78ba0b716ed4" (UID: "67375cba-bd9d-428c-b3be-78ba0b716ed4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:41:12 crc kubenswrapper[4687]: I0314 09:41:12.998283 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb" (OuterVolumeSpecName: "kube-api-access-bhrhb") pod "67375cba-bd9d-428c-b3be-78ba0b716ed4" (UID: "67375cba-bd9d-428c-b3be-78ba0b716ed4"). InnerVolumeSpecName "kube-api-access-bhrhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.093729 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.093758 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhrhb\" (UniqueName: \"kubernetes.io/projected/67375cba-bd9d-428c-b3be-78ba0b716ed4-kube-api-access-bhrhb\") on node \"crc\" DevicePath \"\"" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.127601 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67375cba-bd9d-428c-b3be-78ba0b716ed4" (UID: "67375cba-bd9d-428c-b3be-78ba0b716ed4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.196614 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67375cba-bd9d-428c-b3be-78ba0b716ed4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.392844 4687 generic.go:334] "Generic (PLEG): container finished" podID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerID="33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe" exitCode=0 Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.392890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerDied","Data":"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe"} Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.392921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mbf7f" event={"ID":"67375cba-bd9d-428c-b3be-78ba0b716ed4","Type":"ContainerDied","Data":"4a4561fecb91b5f1ef08608bec4fed3cd1f3f47834f4b360cb6752a7cbbf238a"} Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.392943 4687 scope.go:117] "RemoveContainer" containerID="33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.393053 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbf7f" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.422684 4687 scope.go:117] "RemoveContainer" containerID="f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.431361 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.439257 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbf7f"] Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.443451 4687 scope.go:117] "RemoveContainer" containerID="7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.506442 4687 scope.go:117] "RemoveContainer" containerID="33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe" Mar 14 09:41:13 crc kubenswrapper[4687]: E0314 09:41:13.506989 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe\": container with ID starting with 33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe not found: ID does not exist" containerID="33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.507162 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe"} err="failed to get container status \"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe\": rpc error: code = NotFound desc = could not find container \"33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe\": container with ID starting with 33a80b132699c5347a5894f7f86938b414bb9932f5b9567a6f76b68db02585fe not found: ID does not exist" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.507196 4687 scope.go:117] "RemoveContainer" containerID="f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e" Mar 14 09:41:13 crc kubenswrapper[4687]: E0314 09:41:13.508655 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e\": container with ID starting with f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e not found: ID does not exist" containerID="f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.508689 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e"} err="failed to get container status \"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e\": rpc error: code = NotFound desc = could not find container \"f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e\": container with ID starting with f56325827b8769629eddcba01d229451abc729bce901c6ef5a55963a2df3289e not found: ID does not exist" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.508710 4687 scope.go:117] "RemoveContainer" containerID="7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c" Mar 14 09:41:13 crc kubenswrapper[4687]: E0314 
09:41:13.509000 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c\": container with ID starting with 7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c not found: ID does not exist" containerID="7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.509020 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c"} err="failed to get container status \"7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c\": rpc error: code = NotFound desc = could not find container \"7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c\": container with ID starting with 7116f1bb1e266aac8d6370cba125f8ac4d45daef84bbd8f2c5500ac5af32fc2c not found: ID does not exist" Mar 14 09:41:13 crc kubenswrapper[4687]: I0314 09:41:13.749096 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" path="/var/lib/kubelet/pods/67375cba-bd9d-428c-b3be-78ba0b716ed4/volumes" Mar 14 09:41:20 crc kubenswrapper[4687]: I0314 09:41:20.737385 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:41:20 crc kubenswrapper[4687]: I0314 09:41:20.738019 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:41:20 crc kubenswrapper[4687]: E0314 09:41:20.738229 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:41:20 crc kubenswrapper[4687]: E0314 09:41:20.738265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:41:33 crc kubenswrapper[4687]: I0314 09:41:33.736929 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:41:33 crc kubenswrapper[4687]: I0314 09:41:33.737427 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:41:33 crc kubenswrapper[4687]: E0314 09:41:33.737608 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:41:33 crc kubenswrapper[4687]: E0314 09:41:33.737608 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:41:46 crc kubenswrapper[4687]: I0314 09:41:46.736942 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:41:46 crc kubenswrapper[4687]: E0314 09:41:46.737682 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:41:47 crc kubenswrapper[4687]: I0314 09:41:47.756268 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:41:48 crc kubenswrapper[4687]: I0314 09:41:48.718603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"} Mar 14 09:41:52 crc kubenswrapper[4687]: I0314 09:41:52.219800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:41:52 crc kubenswrapper[4687]: I0314 09:41:52.220236 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:41:56 crc kubenswrapper[4687]: I0314 09:41:56.801452 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" exitCode=1 Mar 14 09:41:56 crc kubenswrapper[4687]: I0314 09:41:56.801508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"} Mar 14 09:41:56 crc kubenswrapper[4687]: I0314 09:41:56.802117 4687 scope.go:117] "RemoveContainer" containerID="f653263aacb3330477d198c20c38b9fb9e08dad7209622c6386a7a19f5ad02bf" Mar 14 09:41:56 crc kubenswrapper[4687]: I0314 09:41:56.802926 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 
14 09:41:56 crc kubenswrapper[4687]: E0314 09:41:56.803223 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:41:57 crc kubenswrapper[4687]: I0314 09:41:57.739364 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:41:57 crc kubenswrapper[4687]: E0314 09:41:57.739651 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.140228 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558022-k7pr2"] Mar 14 09:42:00 crc kubenswrapper[4687]: E0314 09:42:00.141032 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="registry-server" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.141052 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="registry-server" Mar 14 09:42:00 crc kubenswrapper[4687]: E0314 09:42:00.141082 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="extract-utilities" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.141091 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="extract-utilities" Mar 14 09:42:00 crc 
kubenswrapper[4687]: E0314 09:42:00.141123 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="extract-content" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.141130 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="extract-content" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.141389 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="67375cba-bd9d-428c-b3be-78ba0b716ed4" containerName="registry-server" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.142110 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.145650 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.145690 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.145773 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.150444 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-k7pr2"] Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.276681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcxw6\" (UniqueName: \"kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6\") pod \"auto-csr-approver-29558022-k7pr2\" (UID: \"7994d09b-32ad-4f22-a917-1b1aeb49e4c3\") " pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.378447 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcxw6\" (UniqueName: \"kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6\") pod \"auto-csr-approver-29558022-k7pr2\" (UID: \"7994d09b-32ad-4f22-a917-1b1aeb49e4c3\") " pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.400442 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcxw6\" (UniqueName: \"kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6\") pod \"auto-csr-approver-29558022-k7pr2\" (UID: \"7994d09b-32ad-4f22-a917-1b1aeb49e4c3\") " pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.461647 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:00 crc kubenswrapper[4687]: I0314 09:42:00.924097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-k7pr2"] Mar 14 09:42:01 crc kubenswrapper[4687]: I0314 09:42:01.847411 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" event={"ID":"7994d09b-32ad-4f22-a917-1b1aeb49e4c3","Type":"ContainerStarted","Data":"674e9925703767a22c0613b094a6e899d7713154b9a87d2144a5fbae1b146e28"} Mar 14 09:42:02 crc kubenswrapper[4687]: I0314 09:42:02.220720 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:42:02 crc kubenswrapper[4687]: I0314 09:42:02.220796 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:42:02 crc kubenswrapper[4687]: I0314 09:42:02.221633 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 
09:42:02 crc kubenswrapper[4687]: E0314 09:42:02.221980 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:42:02 crc kubenswrapper[4687]: I0314 09:42:02.857754 4687 generic.go:334] "Generic (PLEG): container finished" podID="7994d09b-32ad-4f22-a917-1b1aeb49e4c3" containerID="18d44d91332442569f3a0c688efeeed16158d071a6f46f7c3d3db1e407bc5b9f" exitCode=0 Mar 14 09:42:02 crc kubenswrapper[4687]: I0314 09:42:02.857812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" event={"ID":"7994d09b-32ad-4f22-a917-1b1aeb49e4c3","Type":"ContainerDied","Data":"18d44d91332442569f3a0c688efeeed16158d071a6f46f7c3d3db1e407bc5b9f"} Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.194536 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.362223 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcxw6\" (UniqueName: \"kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6\") pod \"7994d09b-32ad-4f22-a917-1b1aeb49e4c3\" (UID: \"7994d09b-32ad-4f22-a917-1b1aeb49e4c3\") " Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.368071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6" (OuterVolumeSpecName: "kube-api-access-mcxw6") pod "7994d09b-32ad-4f22-a917-1b1aeb49e4c3" (UID: "7994d09b-32ad-4f22-a917-1b1aeb49e4c3"). InnerVolumeSpecName "kube-api-access-mcxw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.464545 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcxw6\" (UniqueName: \"kubernetes.io/projected/7994d09b-32ad-4f22-a917-1b1aeb49e4c3-kube-api-access-mcxw6\") on node \"crc\" DevicePath \"\"" Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.906039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" event={"ID":"7994d09b-32ad-4f22-a917-1b1aeb49e4c3","Type":"ContainerDied","Data":"674e9925703767a22c0613b094a6e899d7713154b9a87d2144a5fbae1b146e28"} Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.906128 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674e9925703767a22c0613b094a6e899d7713154b9a87d2144a5fbae1b146e28" Mar 14 09:42:04 crc kubenswrapper[4687]: I0314 09:42:04.906265 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-k7pr2" Mar 14 09:42:05 crc kubenswrapper[4687]: I0314 09:42:05.264072 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-hbxxp"] Mar 14 09:42:05 crc kubenswrapper[4687]: I0314 09:42:05.273919 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-hbxxp"] Mar 14 09:42:05 crc kubenswrapper[4687]: I0314 09:42:05.748680 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb89acd-29c7-484a-915a-5d8df021e544" path="/var/lib/kubelet/pods/2cb89acd-29c7-484a-915a-5d8df021e544/volumes" Mar 14 09:42:12 crc kubenswrapper[4687]: I0314 09:42:12.737115 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:42:13 crc kubenswrapper[4687]: I0314 09:42:13.013247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"} Mar 14 09:42:13 crc kubenswrapper[4687]: I0314 09:42:13.739852 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:42:13 crc kubenswrapper[4687]: E0314 09:42:13.741163 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:42:21 crc kubenswrapper[4687]: I0314 09:42:21.093044 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" exitCode=1 Mar 14 09:42:21 crc kubenswrapper[4687]: I0314 09:42:21.093107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"} Mar 14 09:42:21 crc kubenswrapper[4687]: I0314 09:42:21.093666 4687 scope.go:117] "RemoveContainer" containerID="24130e9230dec7676cc387898ddaa2d7915261cf23a6ae82119394b03d33619a" Mar 14 09:42:21 crc kubenswrapper[4687]: I0314 09:42:21.094551 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:42:21 crc kubenswrapper[4687]: E0314 09:42:21.094938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:42:22 crc kubenswrapper[4687]: I0314 09:42:22.128405 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw"
Mar 14 09:42:22 crc kubenswrapper[4687]: I0314 09:42:22.128448 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw"
Mar 14 09:42:22 crc kubenswrapper[4687]: I0314 09:42:22.128459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw"
Mar 14 09:42:22 crc kubenswrapper[4687]: I0314 09:42:22.128466 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw"
Mar 14 09:42:22 crc kubenswrapper[4687]: I0314 09:42:22.128863 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:42:22 crc kubenswrapper[4687]: E0314 09:42:22.129134 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:42:24 crc kubenswrapper[4687]: I0314 09:42:24.736939 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:42:24 crc kubenswrapper[4687]: E0314 09:42:24.737470 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:42:29 crc kubenswrapper[4687]: I0314 09:42:29.513038 4687 scope.go:117] "RemoveContainer" containerID="1b4d623566169f09e68dfc4ab0fc384e0e786b6efe1d8504d0ce5ac1d12d25ec"
Mar 14 09:42:35 crc kubenswrapper[4687]: I0314 09:42:35.745972 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:42:35 crc kubenswrapper[4687]: E0314 09:42:35.746522 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:42:36 crc kubenswrapper[4687]: I0314 09:42:36.737422 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:42:36 crc kubenswrapper[4687]: E0314 09:42:36.737709 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:42:48 crc kubenswrapper[4687]: I0314 09:42:48.737081 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:42:48 crc kubenswrapper[4687]: E0314 09:42:48.737837 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:42:51 crc kubenswrapper[4687]: I0314 09:42:51.737633 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:42:51 crc kubenswrapper[4687]: E0314 09:42:51.738105 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:42:54 crc kubenswrapper[4687]: I0314 09:42:54.110992 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 09:42:54 crc kubenswrapper[4687]: I0314 09:42:54.111361 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 09:42:57 crc kubenswrapper[4687]: I0314 09:42:57.949211 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:42:57 crc kubenswrapper[4687]: E0314 09:42:57.950531 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7994d09b-32ad-4f22-a917-1b1aeb49e4c3" containerName="oc"
Mar 14 09:42:57 crc kubenswrapper[4687]: I0314 09:42:57.950549 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7994d09b-32ad-4f22-a917-1b1aeb49e4c3" containerName="oc"
Mar 14 09:42:57 crc kubenswrapper[4687]: I0314 09:42:57.950946 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7994d09b-32ad-4f22-a917-1b1aeb49e4c3" containerName="oc"
Mar 14 09:42:57 crc kubenswrapper[4687]: I0314 09:42:57.965233 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.002257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.091911 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcckj\" (UniqueName: \"kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.092227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.092379 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.194596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcckj\" (UniqueName: \"kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.194716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.194796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.195234 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.195554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.215982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcckj\" (UniqueName: \"kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj\") pod \"certified-operators-jmlxs\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") " pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.318612 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:42:58 crc kubenswrapper[4687]: I0314 09:42:58.837839 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:42:59 crc kubenswrapper[4687]: I0314 09:42:59.458963 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerID="47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52" exitCode=0
Mar 14 09:42:59 crc kubenswrapper[4687]: I0314 09:42:59.459011 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerDied","Data":"47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52"}
Mar 14 09:42:59 crc kubenswrapper[4687]: I0314 09:42:59.459061 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerStarted","Data":"331f97f174dc023f9bca3ff1dc8833711ee10b107e2ac5b5696549d758d7fa85"}
Mar 14 09:43:00 crc kubenswrapper[4687]: I0314 09:43:00.469040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerStarted","Data":"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"}
Mar 14 09:43:01 crc kubenswrapper[4687]: I0314 09:43:01.478808 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerID="97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f" exitCode=0
Mar 14 09:43:01 crc kubenswrapper[4687]: I0314 09:43:01.478881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerDied","Data":"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"}
Mar 14 09:43:01 crc kubenswrapper[4687]: I0314 09:43:01.737299 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:43:01 crc kubenswrapper[4687]: E0314 09:43:01.737692 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:43:02 crc kubenswrapper[4687]: I0314 09:43:02.494969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerStarted","Data":"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"}
Mar 14 09:43:02 crc kubenswrapper[4687]: I0314 09:43:02.517729 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jmlxs" podStartSLOduration=3.1261745850000002 podStartE2EDuration="5.517710887s" podCreationTimestamp="2026-03-14 09:42:57 +0000 UTC" firstStartedPulling="2026-03-14 09:42:59.461486475 +0000 UTC m=+2764.449726850" lastFinishedPulling="2026-03-14 09:43:01.853022777 +0000 UTC m=+2766.841263152" observedRunningTime="2026-03-14 09:43:02.511608177 +0000 UTC m=+2767.499848562" watchObservedRunningTime="2026-03-14 09:43:02.517710887 +0000 UTC m=+2767.505951262"
Mar 14 09:43:06 crc kubenswrapper[4687]: I0314 09:43:06.737392 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:43:06 crc kubenswrapper[4687]: E0314 09:43:06.738314 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:43:08 crc kubenswrapper[4687]: I0314 09:43:08.319039 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:08 crc kubenswrapper[4687]: I0314 09:43:08.319381 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:08 crc kubenswrapper[4687]: I0314 09:43:08.368866 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:08 crc kubenswrapper[4687]: I0314 09:43:08.594759 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:08 crc kubenswrapper[4687]: I0314 09:43:08.642236 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:43:10 crc kubenswrapper[4687]: I0314 09:43:10.570765 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jmlxs" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="registry-server" containerID="cri-o://c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890" gracePeriod=2
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.003665 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.158248 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcckj\" (UniqueName: \"kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj\") pod \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") "
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.158612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities\") pod \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") "
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.158659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content\") pod \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\" (UID: \"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f\") "
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.159323 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities" (OuterVolumeSpecName: "utilities") pod "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" (UID: "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.165743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj" (OuterVolumeSpecName: "kube-api-access-fcckj") pod "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" (UID: "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f"). InnerVolumeSpecName "kube-api-access-fcckj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.211284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" (UID: "c3b2dfc9-f7b9-43e1-b17f-afb27020f33f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.261396 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.261449 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcckj\" (UniqueName: \"kubernetes.io/projected/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-kube-api-access-fcckj\") on node \"crc\" DevicePath \"\""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.261467 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.584153 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerID="c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890" exitCode=0
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.584199 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerDied","Data":"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"}
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.584227 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmlxs" event={"ID":"c3b2dfc9-f7b9-43e1-b17f-afb27020f33f","Type":"ContainerDied","Data":"331f97f174dc023f9bca3ff1dc8833711ee10b107e2ac5b5696549d758d7fa85"}
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.584247 4687 scope.go:117] "RemoveContainer" containerID="c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.584453 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmlxs"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.615413 4687 scope.go:117] "RemoveContainer" containerID="97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.625672 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.636771 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jmlxs"]
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.653885 4687 scope.go:117] "RemoveContainer" containerID="47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.688034 4687 scope.go:117] "RemoveContainer" containerID="c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"
Mar 14 09:43:11 crc kubenswrapper[4687]: E0314 09:43:11.688769 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890\": container with ID starting with c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890 not found: ID does not exist" containerID="c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.688835 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890"} err="failed to get container status \"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890\": rpc error: code = NotFound desc = could not find container \"c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890\": container with ID starting with c33139ff4f37bc0d74e7385d0fe59c269a519a6506aefa2dbf9b0ddcb6708890 not found: ID does not exist"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.688872 4687 scope.go:117] "RemoveContainer" containerID="97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"
Mar 14 09:43:11 crc kubenswrapper[4687]: E0314 09:43:11.689216 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f\": container with ID starting with 97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f not found: ID does not exist" containerID="97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.689266 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f"} err="failed to get container status \"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f\": rpc error: code = NotFound desc = could not find container \"97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f\": container with ID starting with 97587f2a926d1650a010ad6ed960be476b5215762064f467ba89a87b7256836f not found: ID does not exist"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.689300 4687 scope.go:117] "RemoveContainer" containerID="47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52"
Mar 14 09:43:11 crc kubenswrapper[4687]: E0314 09:43:11.689825 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52\": container with ID starting with 47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52 not found: ID does not exist" containerID="47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.689860 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52"} err="failed to get container status \"47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52\": rpc error: code = NotFound desc = could not find container \"47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52\": container with ID starting with 47783c5e06ed19605da1fae798f8bf8e45f46d4d61e8ba1224d167c414b38c52 not found: ID does not exist"
Mar 14 09:43:11 crc kubenswrapper[4687]: I0314 09:43:11.747564 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" path="/var/lib/kubelet/pods/c3b2dfc9-f7b9-43e1-b17f-afb27020f33f/volumes"
Mar 14 09:43:12 crc kubenswrapper[4687]: I0314 09:43:12.737618 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:43:12 crc kubenswrapper[4687]: E0314 09:43:12.738405 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:43:21 crc kubenswrapper[4687]: I0314 09:43:21.737210 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:43:21 crc kubenswrapper[4687]: E0314 09:43:21.737912 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:43:24 crc kubenswrapper[4687]: I0314 09:43:24.111298 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 09:43:24 crc kubenswrapper[4687]: I0314 09:43:24.111634 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 09:43:24 crc kubenswrapper[4687]: I0314 09:43:24.736750 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:43:24 crc kubenswrapper[4687]: E0314 09:43:24.737286 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:43:36 crc kubenswrapper[4687]: I0314 09:43:36.736878 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:43:36 crc kubenswrapper[4687]: E0314 09:43:36.737591 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:43:38 crc kubenswrapper[4687]: I0314 09:43:38.737438 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:43:38 crc kubenswrapper[4687]: E0314 09:43:38.738197 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:43:49 crc kubenswrapper[4687]: I0314 09:43:49.737721 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:43:49 crc kubenswrapper[4687]: E0314 09:43:49.738525 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:43:50 crc kubenswrapper[4687]: I0314 09:43:50.737671 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:43:50 crc kubenswrapper[4687]: E0314 09:43:50.737945 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.111466 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.112032 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.112072 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5"
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.113284 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.113350 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5" gracePeriod=600
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.970410 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5" exitCode=0
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.970486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5"}
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.970925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d"}
Mar 14 09:43:54 crc kubenswrapper[4687]: I0314 09:43:54.970950 4687 scope.go:117] "RemoveContainer" containerID="2a7375a5f2e11d08390ab5127e282f2e8083d492710345c9f80321074add9267"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.136934 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558024-wcl6r"]
Mar 14 09:44:00 crc kubenswrapper[4687]: E0314 09:44:00.137913 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="registry-server"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.137927 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="registry-server"
Mar 14 09:44:00 crc kubenswrapper[4687]: E0314 09:44:00.137962 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="extract-utilities"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.137968 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="extract-utilities"
Mar 14 09:44:00 crc kubenswrapper[4687]: E0314 09:44:00.137984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="extract-content"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.137991 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="extract-content"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.138201 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b2dfc9-f7b9-43e1-b17f-afb27020f33f" containerName="registry-server"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.138878 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-wcl6r"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.141984 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.143180 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.143509 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.153682 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-wcl6r"]
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.238998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqhjr\" (UniqueName: \"kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr\") pod \"auto-csr-approver-29558024-wcl6r\" (UID: \"e024b5b0-c2cf-479c-843f-f25b799a95f9\") " pod="openshift-infra/auto-csr-approver-29558024-wcl6r"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.340827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqhjr\" (UniqueName: \"kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr\") pod \"auto-csr-approver-29558024-wcl6r\" (UID: \"e024b5b0-c2cf-479c-843f-f25b799a95f9\") " pod="openshift-infra/auto-csr-approver-29558024-wcl6r"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.363759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqhjr\" (UniqueName: \"kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr\") pod \"auto-csr-approver-29558024-wcl6r\" (UID: \"e024b5b0-c2cf-479c-843f-f25b799a95f9\") " pod="openshift-infra/auto-csr-approver-29558024-wcl6r"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.460920 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-wcl6r"
Mar 14 09:44:00 crc kubenswrapper[4687]: I0314 09:44:00.940050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-wcl6r"]
Mar 14 09:44:01 crc kubenswrapper[4687]: I0314 09:44:01.027417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-wcl6r" event={"ID":"e024b5b0-c2cf-479c-843f-f25b799a95f9","Type":"ContainerStarted","Data":"2e7caf6201567d05d28fcfc02b04d324f3a32626e31b208ad6297ba1cfaa52d9"}
Mar 14 09:44:01 crc kubenswrapper[4687]: I0314 09:44:01.737116 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f"
Mar 14 09:44:01 crc kubenswrapper[4687]: E0314 09:44:01.737423 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1"
Mar 14 09:44:02 crc kubenswrapper[4687]: I0314 09:44:02.737278 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347"
Mar 14 09:44:02 crc kubenswrapper[4687]: E0314 09:44:02.737770 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"
Mar 14 09:44:03 crc kubenswrapper[4687]: I0314 09:44:03.046948 4687 generic.go:334] "Generic (PLEG): container finished" podID="e024b5b0-c2cf-479c-843f-f25b799a95f9" containerID="b1bbcde0e140286a6a38dc3ccada25744322c391bfa0791bb3371804bceb85b6"
exitCode=0 Mar 14 09:44:03 crc kubenswrapper[4687]: I0314 09:44:03.046997 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-wcl6r" event={"ID":"e024b5b0-c2cf-479c-843f-f25b799a95f9","Type":"ContainerDied","Data":"b1bbcde0e140286a6a38dc3ccada25744322c391bfa0791bb3371804bceb85b6"} Mar 14 09:44:04 crc kubenswrapper[4687]: I0314 09:44:04.453966 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-wcl6r" Mar 14 09:44:04 crc kubenswrapper[4687]: I0314 09:44:04.532410 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqhjr\" (UniqueName: \"kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr\") pod \"e024b5b0-c2cf-479c-843f-f25b799a95f9\" (UID: \"e024b5b0-c2cf-479c-843f-f25b799a95f9\") " Mar 14 09:44:04 crc kubenswrapper[4687]: I0314 09:44:04.538500 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr" (OuterVolumeSpecName: "kube-api-access-dqhjr") pod "e024b5b0-c2cf-479c-843f-f25b799a95f9" (UID: "e024b5b0-c2cf-479c-843f-f25b799a95f9"). InnerVolumeSpecName "kube-api-access-dqhjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:44:04 crc kubenswrapper[4687]: I0314 09:44:04.635564 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqhjr\" (UniqueName: \"kubernetes.io/projected/e024b5b0-c2cf-479c-843f-f25b799a95f9-kube-api-access-dqhjr\") on node \"crc\" DevicePath \"\"" Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.067790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-wcl6r" event={"ID":"e024b5b0-c2cf-479c-843f-f25b799a95f9","Type":"ContainerDied","Data":"2e7caf6201567d05d28fcfc02b04d324f3a32626e31b208ad6297ba1cfaa52d9"} Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.067836 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7caf6201567d05d28fcfc02b04d324f3a32626e31b208ad6297ba1cfaa52d9" Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.067868 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-wcl6r" Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.517365 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-jnzlz"] Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.525098 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-jnzlz"] Mar 14 09:44:05 crc kubenswrapper[4687]: I0314 09:44:05.747205 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ee5faf-bda4-447d-a5f8-ae23e22a75d4" path="/var/lib/kubelet/pods/11ee5faf-bda4-447d-a5f8-ae23e22a75d4/volumes" Mar 14 09:44:13 crc kubenswrapper[4687]: I0314 09:44:13.736926 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:44:13 crc kubenswrapper[4687]: I0314 09:44:13.737403 4687 scope.go:117] "RemoveContainer" 
containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:44:13 crc kubenswrapper[4687]: E0314 09:44:13.737531 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:44:13 crc kubenswrapper[4687]: E0314 09:44:13.737667 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:44:25 crc kubenswrapper[4687]: I0314 09:44:25.742527 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:44:25 crc kubenswrapper[4687]: E0314 09:44:25.743327 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:44:28 crc kubenswrapper[4687]: I0314 09:44:28.737084 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:44:28 crc kubenswrapper[4687]: E0314 09:44:28.738095 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:44:29 crc kubenswrapper[4687]: I0314 09:44:29.630276 4687 scope.go:117] "RemoveContainer" containerID="d80a220eca2f1b256e44efbf0dda2c498c9634b5027dfe8f4d52b47a2a695e54" Mar 14 09:44:37 crc kubenswrapper[4687]: I0314 09:44:37.737159 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:44:37 crc kubenswrapper[4687]: E0314 09:44:37.737839 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:44:42 crc kubenswrapper[4687]: I0314 09:44:42.737136 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:44:42 crc kubenswrapper[4687]: E0314 09:44:42.738908 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:44:51 crc kubenswrapper[4687]: I0314 09:44:51.737662 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:44:51 crc kubenswrapper[4687]: E0314 09:44:51.738310 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:44:54 crc kubenswrapper[4687]: I0314 09:44:54.737615 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:44:54 crc kubenswrapper[4687]: E0314 09:44:54.738146 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.147187 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp"] Mar 14 09:45:00 crc kubenswrapper[4687]: E0314 09:45:00.148179 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e024b5b0-c2cf-479c-843f-f25b799a95f9" containerName="oc" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.148196 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e024b5b0-c2cf-479c-843f-f25b799a95f9" containerName="oc" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.148405 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e024b5b0-c2cf-479c-843f-f25b799a95f9" containerName="oc" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.149108 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.151986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.152924 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.155117 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp"] Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.160751 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.171407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.174477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7jn\" (UniqueName: \"kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.276439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.276527 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7jn\" (UniqueName: \"kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.276658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.277667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.283981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.293469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7jn\" (UniqueName: \"kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn\") pod \"collect-profiles-29558025-tskhp\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.467015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:00 crc kubenswrapper[4687]: I0314 09:45:00.919726 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp"] Mar 14 09:45:01 crc kubenswrapper[4687]: I0314 09:45:01.584550 4687 generic.go:334] "Generic (PLEG): container finished" podID="0e60c7b4-f36b-445a-8227-db71339c9c03" containerID="9391ea68ff30018fd8d9d7a32de560a016b42ee8f62013df5fe2c65d1c6f97f3" exitCode=0 Mar 14 09:45:01 crc kubenswrapper[4687]: I0314 09:45:01.585252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" event={"ID":"0e60c7b4-f36b-445a-8227-db71339c9c03","Type":"ContainerDied","Data":"9391ea68ff30018fd8d9d7a32de560a016b42ee8f62013df5fe2c65d1c6f97f3"} Mar 14 09:45:01 crc kubenswrapper[4687]: I0314 09:45:01.585294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" 
event={"ID":"0e60c7b4-f36b-445a-8227-db71339c9c03","Type":"ContainerStarted","Data":"acea9beacb1e8587b750cd64b8dd9938b7c07bda0e294d72aa76e28710a79639"} Mar 14 09:45:02 crc kubenswrapper[4687]: I0314 09:45:02.940705 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.037713 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume\") pod \"0e60c7b4-f36b-445a-8227-db71339c9c03\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.038050 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm7jn\" (UniqueName: \"kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn\") pod \"0e60c7b4-f36b-445a-8227-db71339c9c03\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.038081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume\") pod \"0e60c7b4-f36b-445a-8227-db71339c9c03\" (UID: \"0e60c7b4-f36b-445a-8227-db71339c9c03\") " Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.038396 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e60c7b4-f36b-445a-8227-db71339c9c03" (UID: "0e60c7b4-f36b-445a-8227-db71339c9c03"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.038589 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e60c7b4-f36b-445a-8227-db71339c9c03-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.044487 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn" (OuterVolumeSpecName: "kube-api-access-jm7jn") pod "0e60c7b4-f36b-445a-8227-db71339c9c03" (UID: "0e60c7b4-f36b-445a-8227-db71339c9c03"). InnerVolumeSpecName "kube-api-access-jm7jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.044496 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e60c7b4-f36b-445a-8227-db71339c9c03" (UID: "0e60c7b4-f36b-445a-8227-db71339c9c03"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.140611 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm7jn\" (UniqueName: \"kubernetes.io/projected/0e60c7b4-f36b-445a-8227-db71339c9c03-kube-api-access-jm7jn\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.140654 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e60c7b4-f36b-445a-8227-db71339c9c03-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.606024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" event={"ID":"0e60c7b4-f36b-445a-8227-db71339c9c03","Type":"ContainerDied","Data":"acea9beacb1e8587b750cd64b8dd9938b7c07bda0e294d72aa76e28710a79639"} Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.606487 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acea9beacb1e8587b750cd64b8dd9938b7c07bda0e294d72aa76e28710a79639" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.606108 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-tskhp" Mar 14 09:45:03 crc kubenswrapper[4687]: I0314 09:45:03.738825 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:45:03 crc kubenswrapper[4687]: E0314 09:45:03.739841 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:45:04 crc kubenswrapper[4687]: I0314 09:45:04.033481 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"] Mar 14 09:45:04 crc kubenswrapper[4687]: I0314 09:45:04.043353 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-l5tgx"] Mar 14 09:45:05 crc kubenswrapper[4687]: I0314 09:45:05.747931 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca59e50-a6ce-4f7f-a81d-1e60677a7ac8" path="/var/lib/kubelet/pods/eca59e50-a6ce-4f7f-a81d-1e60677a7ac8/volumes" Mar 14 09:45:09 crc kubenswrapper[4687]: I0314 09:45:09.738446 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:45:09 crc kubenswrapper[4687]: E0314 09:45:09.739188 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.661519 4687 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:11 crc kubenswrapper[4687]: E0314 09:45:11.662754 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e60c7b4-f36b-445a-8227-db71339c9c03" containerName="collect-profiles" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.662771 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e60c7b4-f36b-445a-8227-db71339c9c03" containerName="collect-profiles" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.663244 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e60c7b4-f36b-445a-8227-db71339c9c03" containerName="collect-profiles" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.664907 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.676478 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.737427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.737508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.737640 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fz66\" (UniqueName: \"kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.839566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fz66\" (UniqueName: \"kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.839746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.839781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.840476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.840771 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.862947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fz66\" (UniqueName: \"kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66\") pod \"redhat-marketplace-x8hnq\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:11 crc kubenswrapper[4687]: I0314 09:45:11.997777 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:12 crc kubenswrapper[4687]: I0314 09:45:12.451101 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:12 crc kubenswrapper[4687]: I0314 09:45:12.685073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerStarted","Data":"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284"} Mar 14 09:45:12 crc kubenswrapper[4687]: I0314 09:45:12.685117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerStarted","Data":"661d4289e99396dcaa6c4f0546c4b754c063ded8effa3a8cf3aaaecdd714046c"} Mar 14 09:45:13 crc kubenswrapper[4687]: I0314 09:45:13.694377 4687 generic.go:334] "Generic (PLEG): container finished" podID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerID="51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284" exitCode=0 Mar 14 09:45:13 crc kubenswrapper[4687]: 
I0314 09:45:13.694428 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerDied","Data":"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284"} Mar 14 09:45:13 crc kubenswrapper[4687]: I0314 09:45:13.697806 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:45:14 crc kubenswrapper[4687]: I0314 09:45:14.705589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerStarted","Data":"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123"} Mar 14 09:45:14 crc kubenswrapper[4687]: I0314 09:45:14.838486 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:14 crc kubenswrapper[4687]: I0314 09:45:14.840858 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:14 crc kubenswrapper[4687]: I0314 09:45:14.854710 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.000279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.000387 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.000450 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbfgb\" (UniqueName: \"kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.102746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.102830 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.102903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbfgb\" (UniqueName: \"kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.103316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.103384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.124884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbfgb\" (UniqueName: \"kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb\") pod \"community-operators-rbwmw\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.201791 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.717267 4687 generic.go:334] "Generic (PLEG): container finished" podID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerID="5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123" exitCode=0 Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.717550 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerDied","Data":"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123"} Mar 14 09:45:15 crc kubenswrapper[4687]: I0314 09:45:15.726567 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:15 crc kubenswrapper[4687]: W0314 09:45:15.730667 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43cbed6a_fcd4_41a0_89b3_b46f51f08524.slice/crio-5183a56d8b498caa3e9e961be37c1e4b2cc754f856a4cf474a392ae7cdf18c3d WatchSource:0}: Error finding container 5183a56d8b498caa3e9e961be37c1e4b2cc754f856a4cf474a392ae7cdf18c3d: Status 404 returned error can't find the container with id 5183a56d8b498caa3e9e961be37c1e4b2cc754f856a4cf474a392ae7cdf18c3d Mar 14 09:45:16 crc kubenswrapper[4687]: I0314 09:45:16.728185 4687 generic.go:334] "Generic (PLEG): container finished" podID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerID="1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f" exitCode=0 Mar 14 09:45:16 crc kubenswrapper[4687]: I0314 09:45:16.728270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerDied","Data":"1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f"} Mar 14 09:45:16 crc kubenswrapper[4687]: I0314 
09:45:16.728576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerStarted","Data":"5183a56d8b498caa3e9e961be37c1e4b2cc754f856a4cf474a392ae7cdf18c3d"} Mar 14 09:45:16 crc kubenswrapper[4687]: I0314 09:45:16.731826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerStarted","Data":"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64"} Mar 14 09:45:16 crc kubenswrapper[4687]: I0314 09:45:16.799128 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x8hnq" podStartSLOduration=3.306786841 podStartE2EDuration="5.799105409s" podCreationTimestamp="2026-03-14 09:45:11 +0000 UTC" firstStartedPulling="2026-03-14 09:45:13.697542454 +0000 UTC m=+2898.685782829" lastFinishedPulling="2026-03-14 09:45:16.189861032 +0000 UTC m=+2901.178101397" observedRunningTime="2026-03-14 09:45:16.793163802 +0000 UTC m=+2901.781404187" watchObservedRunningTime="2026-03-14 09:45:16.799105409 +0000 UTC m=+2901.787345784" Mar 14 09:45:17 crc kubenswrapper[4687]: I0314 09:45:17.753704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerStarted","Data":"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9"} Mar 14 09:45:18 crc kubenswrapper[4687]: I0314 09:45:18.737482 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:45:18 crc kubenswrapper[4687]: E0314 09:45:18.738039 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:45:18 crc kubenswrapper[4687]: I0314 09:45:18.762958 4687 generic.go:334] "Generic (PLEG): container finished" podID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerID="8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9" exitCode=0 Mar 14 09:45:18 crc kubenswrapper[4687]: I0314 09:45:18.763005 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerDied","Data":"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9"} Mar 14 09:45:19 crc kubenswrapper[4687]: I0314 09:45:19.772927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerStarted","Data":"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b"} Mar 14 09:45:19 crc kubenswrapper[4687]: I0314 09:45:19.794442 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rbwmw" podStartSLOduration=3.327763367 podStartE2EDuration="5.794426495s" podCreationTimestamp="2026-03-14 09:45:14 +0000 UTC" firstStartedPulling="2026-03-14 09:45:16.730128099 +0000 UTC m=+2901.718368474" lastFinishedPulling="2026-03-14 09:45:19.196791227 +0000 UTC m=+2904.185031602" observedRunningTime="2026-03-14 09:45:19.793931934 +0000 UTC m=+2904.782172319" watchObservedRunningTime="2026-03-14 09:45:19.794426495 +0000 UTC m=+2904.782666870" Mar 14 09:45:21 crc kubenswrapper[4687]: I0314 09:45:21.997885 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:21 crc kubenswrapper[4687]: I0314 09:45:21.998186 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:22 crc kubenswrapper[4687]: I0314 09:45:22.055089 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:22 crc kubenswrapper[4687]: I0314 09:45:22.844543 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:23 crc kubenswrapper[4687]: I0314 09:45:23.632167 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:23 crc kubenswrapper[4687]: I0314 09:45:23.736658 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:45:23 crc kubenswrapper[4687]: E0314 09:45:23.737863 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:45:24 crc kubenswrapper[4687]: I0314 09:45:24.825791 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x8hnq" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="registry-server" containerID="cri-o://6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64" gracePeriod=2 Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.202563 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.205631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:25 crc 
kubenswrapper[4687]: I0314 09:45:25.255090 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.319405 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.401763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fz66\" (UniqueName: \"kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66\") pod \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.401849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content\") pod \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.402044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities\") pod \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\" (UID: \"9f0dc680-955a-4c10-95a1-ddd8a28bafab\") " Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.402749 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities" (OuterVolumeSpecName: "utilities") pod "9f0dc680-955a-4c10-95a1-ddd8a28bafab" (UID: "9f0dc680-955a-4c10-95a1-ddd8a28bafab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.403313 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.423627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66" (OuterVolumeSpecName: "kube-api-access-9fz66") pod "9f0dc680-955a-4c10-95a1-ddd8a28bafab" (UID: "9f0dc680-955a-4c10-95a1-ddd8a28bafab"). InnerVolumeSpecName "kube-api-access-9fz66". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.446928 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f0dc680-955a-4c10-95a1-ddd8a28bafab" (UID: "9f0dc680-955a-4c10-95a1-ddd8a28bafab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.504912 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fz66\" (UniqueName: \"kubernetes.io/projected/9f0dc680-955a-4c10-95a1-ddd8a28bafab-kube-api-access-9fz66\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.504960 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0dc680-955a-4c10-95a1-ddd8a28bafab-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.835443 4687 generic.go:334] "Generic (PLEG): container finished" podID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerID="6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64" exitCode=0 Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.835518 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8hnq" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.836433 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerDied","Data":"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64"} Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.836566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8hnq" event={"ID":"9f0dc680-955a-4c10-95a1-ddd8a28bafab","Type":"ContainerDied","Data":"661d4289e99396dcaa6c4f0546c4b754c063ded8effa3a8cf3aaaecdd714046c"} Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.836607 4687 scope.go:117] "RemoveContainer" containerID="6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.859449 4687 scope.go:117] "RemoveContainer" 
containerID="5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.865679 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.875173 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8hnq"] Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.882165 4687 scope.go:117] "RemoveContainer" containerID="51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.885787 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.938440 4687 scope.go:117] "RemoveContainer" containerID="6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64" Mar 14 09:45:25 crc kubenswrapper[4687]: E0314 09:45:25.938878 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64\": container with ID starting with 6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64 not found: ID does not exist" containerID="6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.938927 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64"} err="failed to get container status \"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64\": rpc error: code = NotFound desc = could not find container \"6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64\": container with ID starting with 6b2f428e766adfa04299dbff0e846d4f0fc99eade940773bc3f1bbb885a7ab64 
not found: ID does not exist" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.938958 4687 scope.go:117] "RemoveContainer" containerID="5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123" Mar 14 09:45:25 crc kubenswrapper[4687]: E0314 09:45:25.939324 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123\": container with ID starting with 5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123 not found: ID does not exist" containerID="5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.939382 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123"} err="failed to get container status \"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123\": rpc error: code = NotFound desc = could not find container \"5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123\": container with ID starting with 5d979edee0b42defc512a70d57ba6bf46117294177c825fe269460eb8e9df123 not found: ID does not exist" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.939408 4687 scope.go:117] "RemoveContainer" containerID="51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284" Mar 14 09:45:25 crc kubenswrapper[4687]: E0314 09:45:25.939640 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284\": container with ID starting with 51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284 not found: ID does not exist" containerID="51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284" Mar 14 09:45:25 crc kubenswrapper[4687]: I0314 09:45:25.939679 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284"} err="failed to get container status \"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284\": rpc error: code = NotFound desc = could not find container \"51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284\": container with ID starting with 51b54530ed49ddeab1c429aa963988a4213493870c62f28f7f8a82ca5bea8284 not found: ID does not exist" Mar 14 09:45:27 crc kubenswrapper[4687]: I0314 09:45:27.631419 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:27 crc kubenswrapper[4687]: I0314 09:45:27.757803 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" path="/var/lib/kubelet/pods/9f0dc680-955a-4c10-95a1-ddd8a28bafab/volumes" Mar 14 09:45:28 crc kubenswrapper[4687]: I0314 09:45:28.861301 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rbwmw" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="registry-server" containerID="cri-o://58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b" gracePeriod=2 Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.339770 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.380058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content\") pod \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.380513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbfgb\" (UniqueName: \"kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb\") pod \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.380576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities\") pod \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\" (UID: \"43cbed6a-fcd4-41a0-89b3-b46f51f08524\") " Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.381647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities" (OuterVolumeSpecName: "utilities") pod "43cbed6a-fcd4-41a0-89b3-b46f51f08524" (UID: "43cbed6a-fcd4-41a0-89b3-b46f51f08524"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.391683 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb" (OuterVolumeSpecName: "kube-api-access-xbfgb") pod "43cbed6a-fcd4-41a0-89b3-b46f51f08524" (UID: "43cbed6a-fcd4-41a0-89b3-b46f51f08524"). InnerVolumeSpecName "kube-api-access-xbfgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.431763 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43cbed6a-fcd4-41a0-89b3-b46f51f08524" (UID: "43cbed6a-fcd4-41a0-89b3-b46f51f08524"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.482641 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbfgb\" (UniqueName: \"kubernetes.io/projected/43cbed6a-fcd4-41a0-89b3-b46f51f08524-kube-api-access-xbfgb\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.482672 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.482681 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43cbed6a-fcd4-41a0-89b3-b46f51f08524-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.702514 4687 scope.go:117] "RemoveContainer" containerID="d0c8eb3db79fed9a7cd0ada8857577dd05021b0009e222c3d6273cf2522943c1" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.871128 4687 generic.go:334] "Generic (PLEG): container finished" podID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerID="58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b" exitCode=0 Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.871185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" 
event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerDied","Data":"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b"} Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.871218 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rbwmw" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.871244 4687 scope.go:117] "RemoveContainer" containerID="58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.871231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbwmw" event={"ID":"43cbed6a-fcd4-41a0-89b3-b46f51f08524","Type":"ContainerDied","Data":"5183a56d8b498caa3e9e961be37c1e4b2cc754f856a4cf474a392ae7cdf18c3d"} Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.910495 4687 scope.go:117] "RemoveContainer" containerID="8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.920827 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.938085 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rbwmw"] Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.966252 4687 scope.go:117] "RemoveContainer" containerID="1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.993606 4687 scope.go:117] "RemoveContainer" containerID="58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b" Mar 14 09:45:29 crc kubenswrapper[4687]: E0314 09:45:29.994115 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b\": container 
with ID starting with 58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b not found: ID does not exist" containerID="58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.994152 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b"} err="failed to get container status \"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b\": rpc error: code = NotFound desc = could not find container \"58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b\": container with ID starting with 58e7bc0acd99108fb673f88938d7f33f2d5affad153b6d96c86676811328122b not found: ID does not exist" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.994177 4687 scope.go:117] "RemoveContainer" containerID="8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9" Mar 14 09:45:29 crc kubenswrapper[4687]: E0314 09:45:29.994635 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9\": container with ID starting with 8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9 not found: ID does not exist" containerID="8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.994672 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9"} err="failed to get container status \"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9\": rpc error: code = NotFound desc = could not find container \"8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9\": container with ID starting with 8b7a65395719e80355f905693c5b0c8ec59e4491544e55e10a000381672a3fe9 not 
found: ID does not exist" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.994703 4687 scope.go:117] "RemoveContainer" containerID="1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f" Mar 14 09:45:29 crc kubenswrapper[4687]: E0314 09:45:29.995038 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f\": container with ID starting with 1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f not found: ID does not exist" containerID="1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f" Mar 14 09:45:29 crc kubenswrapper[4687]: I0314 09:45:29.995074 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f"} err="failed to get container status \"1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f\": rpc error: code = NotFound desc = could not find container \"1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f\": container with ID starting with 1bd3ef676fd8a6e2400d06100bbc73201d94e7ba2d2b201e0b694d4740580e1f not found: ID does not exist" Mar 14 09:45:31 crc kubenswrapper[4687]: I0314 09:45:31.737839 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:45:31 crc kubenswrapper[4687]: E0314 09:45:31.738495 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:45:31 crc kubenswrapper[4687]: I0314 09:45:31.751486 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" path="/var/lib/kubelet/pods/43cbed6a-fcd4-41a0-89b3-b46f51f08524/volumes" Mar 14 09:45:34 crc kubenswrapper[4687]: I0314 09:45:34.737628 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:45:34 crc kubenswrapper[4687]: E0314 09:45:34.738036 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:45:45 crc kubenswrapper[4687]: I0314 09:45:45.748964 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:45:45 crc kubenswrapper[4687]: E0314 09:45:45.749847 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:45:47 crc kubenswrapper[4687]: I0314 09:45:47.739425 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:45:47 crc kubenswrapper[4687]: E0314 09:45:47.739988 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:45:54 crc kubenswrapper[4687]: I0314 09:45:54.111196 4687 patch_prober.go:28] interesting 
pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:45:54 crc kubenswrapper[4687]: I0314 09:45:54.111722 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:45:59 crc kubenswrapper[4687]: I0314 09:45:59.737426 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:45:59 crc kubenswrapper[4687]: E0314 09:45:59.738172 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.145051 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558026-dww2x"] Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.145879 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.145904 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.145931 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" 
containerName="extract-content" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.145940 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="extract-content" Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.145958 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.145965 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.145984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="extract-utilities" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.145992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="extract-utilities" Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.146006 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="extract-utilities" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.146014 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="extract-utilities" Mar 14 09:46:00 crc kubenswrapper[4687]: E0314 09:46:00.146042 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="extract-content" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.146049 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" containerName="extract-content" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.146286 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="43cbed6a-fcd4-41a0-89b3-b46f51f08524" 
containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.146316 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0dc680-955a-4c10-95a1-ddd8a28bafab" containerName="registry-server" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.147163 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.149832 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.150433 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.152474 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.155325 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-dww2x"] Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.198072 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjt5\" (UniqueName: \"kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5\") pod \"auto-csr-approver-29558026-dww2x\" (UID: \"bb016512-0cc0-428e-a0d1-f1beee23e06f\") " pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.300823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjt5\" (UniqueName: \"kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5\") pod \"auto-csr-approver-29558026-dww2x\" (UID: \"bb016512-0cc0-428e-a0d1-f1beee23e06f\") " pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 
09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.321570 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjt5\" (UniqueName: \"kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5\") pod \"auto-csr-approver-29558026-dww2x\" (UID: \"bb016512-0cc0-428e-a0d1-f1beee23e06f\") " pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.470496 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:00 crc kubenswrapper[4687]: I0314 09:46:00.926786 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-dww2x"] Mar 14 09:46:01 crc kubenswrapper[4687]: I0314 09:46:01.137664 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-dww2x" event={"ID":"bb016512-0cc0-428e-a0d1-f1beee23e06f","Type":"ContainerStarted","Data":"27330e4fa5fd5c78f6235b83415013065c603366519adc1e358a248c13e14e8b"} Mar 14 09:46:01 crc kubenswrapper[4687]: I0314 09:46:01.738411 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:46:01 crc kubenswrapper[4687]: E0314 09:46:01.739529 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:46:02 crc kubenswrapper[4687]: I0314 09:46:02.147163 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-dww2x" 
event={"ID":"bb016512-0cc0-428e-a0d1-f1beee23e06f","Type":"ContainerStarted","Data":"aa5ce0dd42135800634d70c6a9ca3b7def144c9856a85cbff2ded7baf3d24c78"} Mar 14 09:46:02 crc kubenswrapper[4687]: I0314 09:46:02.164271 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558026-dww2x" podStartSLOduration=1.265704043 podStartE2EDuration="2.164246136s" podCreationTimestamp="2026-03-14 09:46:00 +0000 UTC" firstStartedPulling="2026-03-14 09:46:00.931515977 +0000 UTC m=+2945.919756352" lastFinishedPulling="2026-03-14 09:46:01.83005806 +0000 UTC m=+2946.818298445" observedRunningTime="2026-03-14 09:46:02.161035787 +0000 UTC m=+2947.149276162" watchObservedRunningTime="2026-03-14 09:46:02.164246136 +0000 UTC m=+2947.152486521" Mar 14 09:46:03 crc kubenswrapper[4687]: I0314 09:46:03.157274 4687 generic.go:334] "Generic (PLEG): container finished" podID="bb016512-0cc0-428e-a0d1-f1beee23e06f" containerID="aa5ce0dd42135800634d70c6a9ca3b7def144c9856a85cbff2ded7baf3d24c78" exitCode=0 Mar 14 09:46:03 crc kubenswrapper[4687]: I0314 09:46:03.157594 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-dww2x" event={"ID":"bb016512-0cc0-428e-a0d1-f1beee23e06f","Type":"ContainerDied","Data":"aa5ce0dd42135800634d70c6a9ca3b7def144c9856a85cbff2ded7baf3d24c78"} Mar 14 09:46:04 crc kubenswrapper[4687]: I0314 09:46:04.620150 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:04 crc kubenswrapper[4687]: I0314 09:46:04.690464 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjt5\" (UniqueName: \"kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5\") pod \"bb016512-0cc0-428e-a0d1-f1beee23e06f\" (UID: \"bb016512-0cc0-428e-a0d1-f1beee23e06f\") " Mar 14 09:46:04 crc kubenswrapper[4687]: I0314 09:46:04.696178 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5" (OuterVolumeSpecName: "kube-api-access-dhjt5") pod "bb016512-0cc0-428e-a0d1-f1beee23e06f" (UID: "bb016512-0cc0-428e-a0d1-f1beee23e06f"). InnerVolumeSpecName "kube-api-access-dhjt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:46:04 crc kubenswrapper[4687]: I0314 09:46:04.793517 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjt5\" (UniqueName: \"kubernetes.io/projected/bb016512-0cc0-428e-a0d1-f1beee23e06f-kube-api-access-dhjt5\") on node \"crc\" DevicePath \"\"" Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.182295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-dww2x" event={"ID":"bb016512-0cc0-428e-a0d1-f1beee23e06f","Type":"ContainerDied","Data":"27330e4fa5fd5c78f6235b83415013065c603366519adc1e358a248c13e14e8b"} Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.182616 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27330e4fa5fd5c78f6235b83415013065c603366519adc1e358a248c13e14e8b" Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.182350 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-dww2x" Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.227816 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-kptwb"] Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.238103 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-kptwb"] Mar 14 09:46:05 crc kubenswrapper[4687]: I0314 09:46:05.750355 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac329c32-9aee-4197-bc3c-a9ecb12e057c" path="/var/lib/kubelet/pods/ac329c32-9aee-4197-bc3c-a9ecb12e057c/volumes" Mar 14 09:46:11 crc kubenswrapper[4687]: I0314 09:46:11.737886 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:46:11 crc kubenswrapper[4687]: E0314 09:46:11.740852 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:46:14 crc kubenswrapper[4687]: I0314 09:46:14.737779 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:46:14 crc kubenswrapper[4687]: E0314 09:46:14.739161 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:46:24 crc kubenswrapper[4687]: I0314 09:46:24.111194 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:46:24 crc kubenswrapper[4687]: I0314 09:46:24.111954 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:46:24 crc kubenswrapper[4687]: I0314 09:46:24.737047 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:46:24 crc kubenswrapper[4687]: E0314 09:46:24.737445 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:46:25 crc kubenswrapper[4687]: I0314 09:46:25.748928 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:46:25 crc kubenswrapper[4687]: E0314 09:46:25.749377 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:46:29 crc kubenswrapper[4687]: I0314 09:46:29.775431 4687 scope.go:117] "RemoveContainer" containerID="036cef85ef83953c4bc8aca8324062f5568c3d17dc4f6d23ad78452af9ca072f" Mar 14 09:46:38 crc 
kubenswrapper[4687]: I0314 09:46:38.737649 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:46:38 crc kubenswrapper[4687]: E0314 09:46:38.738314 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:46:39 crc kubenswrapper[4687]: I0314 09:46:39.737494 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:46:39 crc kubenswrapper[4687]: E0314 09:46:39.737739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:46:51 crc kubenswrapper[4687]: I0314 09:46:51.736812 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:46:51 crc kubenswrapper[4687]: E0314 09:46:51.737485 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:46:52 crc kubenswrapper[4687]: I0314 09:46:52.737552 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:46:52 crc kubenswrapper[4687]: E0314 09:46:52.737899 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.111263 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.111630 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.111675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.112381 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.112442 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" 
containerName="machine-config-daemon" containerID="cri-o://d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" gracePeriod=600 Mar 14 09:46:54 crc kubenswrapper[4687]: E0314 09:46:54.240647 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.644514 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" exitCode=0 Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.644562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d"} Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.644597 4687 scope.go:117] "RemoveContainer" containerID="a2300932444a6795ada46881b7a419d223c3bffaa3101fe7a362619dd2eb71d5" Mar 14 09:46:54 crc kubenswrapper[4687]: I0314 09:46:54.645255 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:46:54 crc kubenswrapper[4687]: E0314 09:46:54.645528 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:47:03 crc kubenswrapper[4687]: I0314 09:47:03.745543 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:47:03 crc kubenswrapper[4687]: I0314 09:47:03.750698 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:47:03 crc kubenswrapper[4687]: E0314 09:47:03.751454 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:47:04 crc kubenswrapper[4687]: I0314 09:47:04.756428 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c"} Mar 14 09:47:08 crc kubenswrapper[4687]: I0314 09:47:08.737606 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:47:08 crc kubenswrapper[4687]: E0314 09:47:08.738377 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:47:11 crc kubenswrapper[4687]: I0314 09:47:11.823124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c"} Mar 14 09:47:11 crc kubenswrapper[4687]: I0314 09:47:11.823544 4687 scope.go:117] "RemoveContainer" containerID="568fcf3a4996fc8d086ba45bbc1a1803e19581faeb105fc769f62a9bae10e347" Mar 14 09:47:11 crc kubenswrapper[4687]: I0314 09:47:11.823681 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" exitCode=1 Mar 14 09:47:11 crc kubenswrapper[4687]: I0314 09:47:11.826188 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:47:11 crc kubenswrapper[4687]: E0314 09:47:11.837386 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:12 crc kubenswrapper[4687]: I0314 09:47:12.219892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:47:12 crc kubenswrapper[4687]: I0314 09:47:12.219944 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:47:12 crc kubenswrapper[4687]: I0314 09:47:12.219955 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:47:12 crc kubenswrapper[4687]: I0314 09:47:12.220271 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:47:12 crc kubenswrapper[4687]: I0314 09:47:12.834623 4687 
scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:47:12 crc kubenswrapper[4687]: E0314 09:47:12.834892 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:13 crc kubenswrapper[4687]: I0314 09:47:13.841542 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:47:13 crc kubenswrapper[4687]: E0314 09:47:13.842292 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:17 crc kubenswrapper[4687]: I0314 09:47:17.736757 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:47:17 crc kubenswrapper[4687]: E0314 09:47:17.737119 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:47:21 crc kubenswrapper[4687]: I0314 09:47:21.737495 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:47:21 crc kubenswrapper[4687]: E0314 09:47:21.738329 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:47:28 crc kubenswrapper[4687]: I0314 09:47:28.737735 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:47:28 crc kubenswrapper[4687]: E0314 09:47:28.738673 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:30 crc kubenswrapper[4687]: I0314 09:47:30.737016 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:47:30 crc kubenswrapper[4687]: I0314 09:47:30.996577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55"} Mar 14 09:47:32 crc kubenswrapper[4687]: I0314 09:47:32.127874 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:47:32 crc kubenswrapper[4687]: I0314 09:47:32.127935 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:47:34 crc kubenswrapper[4687]: I0314 09:47:34.737244 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:47:34 
crc kubenswrapper[4687]: E0314 09:47:34.737830 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:47:39 crc kubenswrapper[4687]: I0314 09:47:39.067603 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" exitCode=1 Mar 14 09:47:39 crc kubenswrapper[4687]: I0314 09:47:39.067700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55"} Mar 14 09:47:39 crc kubenswrapper[4687]: I0314 09:47:39.068123 4687 scope.go:117] "RemoveContainer" containerID="519777a86308f6bac18cfd49dbcf224c8c7ef9be10c9cafe3009ec86348dc06f" Mar 14 09:47:39 crc kubenswrapper[4687]: I0314 09:47:39.069785 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:47:39 crc kubenswrapper[4687]: E0314 09:47:39.070189 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:47:40 crc kubenswrapper[4687]: I0314 09:47:40.737091 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 
09:47:40 crc kubenswrapper[4687]: E0314 09:47:40.737656 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:42 crc kubenswrapper[4687]: I0314 09:47:42.128354 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:47:42 crc kubenswrapper[4687]: I0314 09:47:42.128633 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:47:42 crc kubenswrapper[4687]: I0314 09:47:42.129472 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:47:42 crc kubenswrapper[4687]: E0314 09:47:42.129739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:47:48 crc kubenswrapper[4687]: I0314 09:47:48.737040 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:47:48 crc kubenswrapper[4687]: E0314 09:47:48.737738 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:47:54 crc kubenswrapper[4687]: I0314 09:47:54.737925 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:47:54 crc kubenswrapper[4687]: I0314 09:47:54.740071 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:47:54 crc kubenswrapper[4687]: E0314 09:47:54.740385 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:47:54 crc kubenswrapper[4687]: E0314 09:47:54.740733 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.142367 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558028-qqfxw"] Mar 14 09:48:00 crc kubenswrapper[4687]: E0314 09:48:00.143234 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb016512-0cc0-428e-a0d1-f1beee23e06f" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.143246 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb016512-0cc0-428e-a0d1-f1beee23e06f" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.143476 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb016512-0cc0-428e-a0d1-f1beee23e06f" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4687]: 
I0314 09:48:00.147729 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.151213 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.151221 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.152801 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-qqfxw"] Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.152960 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.261631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl8x\" (UniqueName: \"kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x\") pod \"auto-csr-approver-29558028-qqfxw\" (UID: \"5fcf385c-b4fc-41c1-a17f-99eb29786fec\") " pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.364136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl8x\" (UniqueName: \"kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x\") pod \"auto-csr-approver-29558028-qqfxw\" (UID: \"5fcf385c-b4fc-41c1-a17f-99eb29786fec\") " pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.382957 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl8x\" (UniqueName: \"kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x\") pod 
\"auto-csr-approver-29558028-qqfxw\" (UID: \"5fcf385c-b4fc-41c1-a17f-99eb29786fec\") " pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.471703 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:00 crc kubenswrapper[4687]: I0314 09:48:00.927401 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-qqfxw"] Mar 14 09:48:01 crc kubenswrapper[4687]: I0314 09:48:01.298770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" event={"ID":"5fcf385c-b4fc-41c1-a17f-99eb29786fec","Type":"ContainerStarted","Data":"e4e966f51dd0523e3d0dbb06eb3b7581c67e70855193131d0b17b765c608f1b5"} Mar 14 09:48:02 crc kubenswrapper[4687]: I0314 09:48:02.307904 4687 generic.go:334] "Generic (PLEG): container finished" podID="5fcf385c-b4fc-41c1-a17f-99eb29786fec" containerID="c8d85eeb3c703fc6064684dd255bf2d52668e134df9b0e8780fcb8e91fb88eef" exitCode=0 Mar 14 09:48:02 crc kubenswrapper[4687]: I0314 09:48:02.307954 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" event={"ID":"5fcf385c-b4fc-41c1-a17f-99eb29786fec","Type":"ContainerDied","Data":"c8d85eeb3c703fc6064684dd255bf2d52668e134df9b0e8780fcb8e91fb88eef"} Mar 14 09:48:03 crc kubenswrapper[4687]: I0314 09:48:03.665601 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:03 crc kubenswrapper[4687]: I0314 09:48:03.737701 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:48:03 crc kubenswrapper[4687]: E0314 09:48:03.737952 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:48:03 crc kubenswrapper[4687]: I0314 09:48:03.840244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zl8x\" (UniqueName: \"kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x\") pod \"5fcf385c-b4fc-41c1-a17f-99eb29786fec\" (UID: \"5fcf385c-b4fc-41c1-a17f-99eb29786fec\") " Mar 14 09:48:03 crc kubenswrapper[4687]: I0314 09:48:03.848644 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x" (OuterVolumeSpecName: "kube-api-access-2zl8x") pod "5fcf385c-b4fc-41c1-a17f-99eb29786fec" (UID: "5fcf385c-b4fc-41c1-a17f-99eb29786fec"). InnerVolumeSpecName "kube-api-access-2zl8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:48:03 crc kubenswrapper[4687]: I0314 09:48:03.945172 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zl8x\" (UniqueName: \"kubernetes.io/projected/5fcf385c-b4fc-41c1-a17f-99eb29786fec-kube-api-access-2zl8x\") on node \"crc\" DevicePath \"\"" Mar 14 09:48:04 crc kubenswrapper[4687]: I0314 09:48:04.328927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" event={"ID":"5fcf385c-b4fc-41c1-a17f-99eb29786fec","Type":"ContainerDied","Data":"e4e966f51dd0523e3d0dbb06eb3b7581c67e70855193131d0b17b765c608f1b5"} Mar 14 09:48:04 crc kubenswrapper[4687]: I0314 09:48:04.328969 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4e966f51dd0523e3d0dbb06eb3b7581c67e70855193131d0b17b765c608f1b5" Mar 14 09:48:04 crc kubenswrapper[4687]: I0314 09:48:04.329731 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-qqfxw" Mar 14 09:48:04 crc kubenswrapper[4687]: I0314 09:48:04.733061 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-k7pr2"] Mar 14 09:48:04 crc kubenswrapper[4687]: I0314 09:48:04.740994 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-k7pr2"] Mar 14 09:48:05 crc kubenswrapper[4687]: I0314 09:48:05.748562 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7994d09b-32ad-4f22-a917-1b1aeb49e4c3" path="/var/lib/kubelet/pods/7994d09b-32ad-4f22-a917-1b1aeb49e4c3/volumes" Mar 14 09:48:06 crc kubenswrapper[4687]: I0314 09:48:06.736670 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:48:06 crc kubenswrapper[4687]: E0314 09:48:06.736865 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:48:09 crc kubenswrapper[4687]: I0314 09:48:09.738069 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:48:09 crc kubenswrapper[4687]: E0314 09:48:09.738616 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:48:14 crc kubenswrapper[4687]: I0314 09:48:14.736959 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:48:14 crc kubenswrapper[4687]: E0314 09:48:14.737561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:48:17 crc kubenswrapper[4687]: I0314 09:48:17.737586 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:48:17 crc kubenswrapper[4687]: E0314 09:48:17.738510 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:48:23 crc kubenswrapper[4687]: I0314 09:48:23.737706 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:48:23 crc kubenswrapper[4687]: E0314 09:48:23.738591 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:48:27 crc kubenswrapper[4687]: I0314 09:48:27.737809 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:48:27 crc kubenswrapper[4687]: E0314 09:48:27.738771 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:48:28 crc kubenswrapper[4687]: I0314 09:48:28.736630 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:48:28 crc kubenswrapper[4687]: E0314 09:48:28.736967 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:48:29 crc kubenswrapper[4687]: I0314 09:48:29.886850 4687 scope.go:117] "RemoveContainer" containerID="18d44d91332442569f3a0c688efeeed16158d071a6f46f7c3d3db1e407bc5b9f" Mar 14 09:48:35 crc kubenswrapper[4687]: I0314 09:48:35.743327 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:48:35 crc kubenswrapper[4687]: E0314 09:48:35.744059 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:48:39 crc kubenswrapper[4687]: I0314 09:48:39.737076 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:48:39 crc kubenswrapper[4687]: I0314 09:48:39.738460 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:48:39 crc kubenswrapper[4687]: E0314 09:48:39.738649 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:48:39 crc kubenswrapper[4687]: E0314 09:48:39.738838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:48:49 crc kubenswrapper[4687]: I0314 09:48:49.740610 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:48:49 crc kubenswrapper[4687]: E0314 09:48:49.742808 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:48:52 crc kubenswrapper[4687]: I0314 09:48:52.737041 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:48:52 crc kubenswrapper[4687]: E0314 09:48:52.737692 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:48:54 crc kubenswrapper[4687]: I0314 09:48:54.738381 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:48:54 crc kubenswrapper[4687]: E0314 09:48:54.739264 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:49:03 crc kubenswrapper[4687]: I0314 09:49:03.737757 
4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:49:03 crc kubenswrapper[4687]: I0314 09:49:03.738594 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:49:03 crc kubenswrapper[4687]: E0314 09:49:03.738954 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:49:03 crc kubenswrapper[4687]: E0314 09:49:03.739060 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:49:05 crc kubenswrapper[4687]: I0314 09:49:05.743320 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:49:05 crc kubenswrapper[4687]: E0314 09:49:05.743844 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:49:14 crc kubenswrapper[4687]: I0314 09:49:14.737391 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:49:14 crc 
kubenswrapper[4687]: E0314 09:49:14.738114 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:49:17 crc kubenswrapper[4687]: I0314 09:49:17.737552 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:49:17 crc kubenswrapper[4687]: E0314 09:49:17.738134 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:49:17 crc kubenswrapper[4687]: I0314 09:49:17.738523 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:49:17 crc kubenswrapper[4687]: E0314 09:49:17.738852 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:49:29 crc kubenswrapper[4687]: I0314 09:49:29.736273 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:49:29 crc kubenswrapper[4687]: I0314 09:49:29.736794 4687 scope.go:117] "RemoveContainer" 
containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:49:29 crc kubenswrapper[4687]: E0314 09:49:29.736929 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:49:29 crc kubenswrapper[4687]: E0314 09:49:29.737201 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:49:30 crc kubenswrapper[4687]: I0314 09:49:30.736507 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:49:30 crc kubenswrapper[4687]: E0314 09:49:30.736777 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:49:40 crc kubenswrapper[4687]: I0314 09:49:40.737389 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:49:40 crc kubenswrapper[4687]: I0314 09:49:40.737968 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:49:40 crc kubenswrapper[4687]: E0314 09:49:40.738191 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:49:40 crc kubenswrapper[4687]: E0314 09:49:40.738419 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:49:41 crc kubenswrapper[4687]: I0314 09:49:41.737293 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:49:41 crc kubenswrapper[4687]: E0314 09:49:41.737762 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:49:53 crc kubenswrapper[4687]: I0314 09:49:53.737326 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:49:53 crc kubenswrapper[4687]: E0314 09:49:53.738035 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:49:53 crc kubenswrapper[4687]: I0314 09:49:53.738067 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:49:53 crc kubenswrapper[4687]: E0314 09:49:53.738299 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:49:54 crc kubenswrapper[4687]: I0314 09:49:54.737797 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:49:54 crc kubenswrapper[4687]: E0314 09:49:54.738024 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.147844 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558030-vsh4v"] Mar 14 09:50:00 crc kubenswrapper[4687]: E0314 09:50:00.149129 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf385c-b4fc-41c1-a17f-99eb29786fec" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.149153 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf385c-b4fc-41c1-a17f-99eb29786fec" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.149600 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5fcf385c-b4fc-41c1-a17f-99eb29786fec" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.150781 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.154159 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.154261 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.154424 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.164941 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-vsh4v"] Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.277535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlsg\" (UniqueName: \"kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg\") pod \"auto-csr-approver-29558030-vsh4v\" (UID: \"08f0c21c-d464-4362-aa44-713697871f0b\") " pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.379918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czlsg\" (UniqueName: \"kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg\") pod \"auto-csr-approver-29558030-vsh4v\" (UID: \"08f0c21c-d464-4362-aa44-713697871f0b\") " pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.399520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czlsg\" (UniqueName: 
\"kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg\") pod \"auto-csr-approver-29558030-vsh4v\" (UID: \"08f0c21c-d464-4362-aa44-713697871f0b\") " pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.481213 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:00 crc kubenswrapper[4687]: I0314 09:50:00.906760 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-vsh4v"] Mar 14 09:50:00 crc kubenswrapper[4687]: W0314 09:50:00.918229 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08f0c21c_d464_4362_aa44_713697871f0b.slice/crio-8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da WatchSource:0}: Error finding container 8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da: Status 404 returned error can't find the container with id 8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da Mar 14 09:50:01 crc kubenswrapper[4687]: I0314 09:50:01.374405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" event={"ID":"08f0c21c-d464-4362-aa44-713697871f0b","Type":"ContainerStarted","Data":"8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da"} Mar 14 09:50:02 crc kubenswrapper[4687]: I0314 09:50:02.389100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" event={"ID":"08f0c21c-d464-4362-aa44-713697871f0b","Type":"ContainerStarted","Data":"4a05b5ef6e6bf506c963f5724800ef716cf437947271cace3f0eecdc25d7636b"} Mar 14 09:50:02 crc kubenswrapper[4687]: I0314 09:50:02.401908 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" 
podStartSLOduration=1.327611477 podStartE2EDuration="2.401888022s" podCreationTimestamp="2026-03-14 09:50:00 +0000 UTC" firstStartedPulling="2026-03-14 09:50:00.920453293 +0000 UTC m=+3185.908693658" lastFinishedPulling="2026-03-14 09:50:01.994729788 +0000 UTC m=+3186.982970203" observedRunningTime="2026-03-14 09:50:02.400445246 +0000 UTC m=+3187.388685631" watchObservedRunningTime="2026-03-14 09:50:02.401888022 +0000 UTC m=+3187.390128407" Mar 14 09:50:03 crc kubenswrapper[4687]: I0314 09:50:03.401963 4687 generic.go:334] "Generic (PLEG): container finished" podID="08f0c21c-d464-4362-aa44-713697871f0b" containerID="4a05b5ef6e6bf506c963f5724800ef716cf437947271cace3f0eecdc25d7636b" exitCode=0 Mar 14 09:50:03 crc kubenswrapper[4687]: I0314 09:50:03.402012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" event={"ID":"08f0c21c-d464-4362-aa44-713697871f0b","Type":"ContainerDied","Data":"4a05b5ef6e6bf506c963f5724800ef716cf437947271cace3f0eecdc25d7636b"} Mar 14 09:50:04 crc kubenswrapper[4687]: I0314 09:50:04.764138 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:04 crc kubenswrapper[4687]: I0314 09:50:04.904676 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czlsg\" (UniqueName: \"kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg\") pod \"08f0c21c-d464-4362-aa44-713697871f0b\" (UID: \"08f0c21c-d464-4362-aa44-713697871f0b\") " Mar 14 09:50:04 crc kubenswrapper[4687]: I0314 09:50:04.911051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg" (OuterVolumeSpecName: "kube-api-access-czlsg") pod "08f0c21c-d464-4362-aa44-713697871f0b" (UID: "08f0c21c-d464-4362-aa44-713697871f0b"). InnerVolumeSpecName "kube-api-access-czlsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.007241 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czlsg\" (UniqueName: \"kubernetes.io/projected/08f0c21c-d464-4362-aa44-713697871f0b-kube-api-access-czlsg\") on node \"crc\" DevicePath \"\"" Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.422997 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" event={"ID":"08f0c21c-d464-4362-aa44-713697871f0b","Type":"ContainerDied","Data":"8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da"} Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.423036 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-vsh4v" Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.423044 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebbdf0f288d368da86b5c0841df105fad9338f10d9d0ecb1818225c134972da" Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.490980 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-wcl6r"] Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.500264 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-wcl6r"] Mar 14 09:50:05 crc kubenswrapper[4687]: I0314 09:50:05.759788 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e024b5b0-c2cf-479c-843f-f25b799a95f9" path="/var/lib/kubelet/pods/e024b5b0-c2cf-479c-843f-f25b799a95f9/volumes" Mar 14 09:50:06 crc kubenswrapper[4687]: I0314 09:50:06.737307 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:50:06 crc kubenswrapper[4687]: I0314 09:50:06.737635 4687 scope.go:117] "RemoveContainer" 
containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:50:06 crc kubenswrapper[4687]: E0314 09:50:06.737784 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:50:06 crc kubenswrapper[4687]: E0314 09:50:06.737861 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:07 crc kubenswrapper[4687]: I0314 09:50:07.737871 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:50:07 crc kubenswrapper[4687]: E0314 09:50:07.738248 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:50:17 crc kubenswrapper[4687]: I0314 09:50:17.737078 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:50:17 crc kubenswrapper[4687]: E0314 09:50:17.737859 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:18 crc kubenswrapper[4687]: I0314 09:50:18.738131 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:50:18 crc kubenswrapper[4687]: E0314 09:50:18.738679 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:50:20 crc kubenswrapper[4687]: I0314 09:50:20.737675 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:50:20 crc kubenswrapper[4687]: E0314 09:50:20.738362 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:50:29 crc kubenswrapper[4687]: I0314 09:50:29.737127 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:50:29 crc kubenswrapper[4687]: E0314 09:50:29.738198 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:29 crc kubenswrapper[4687]: I0314 09:50:29.988657 4687 scope.go:117] "RemoveContainer" containerID="b1bbcde0e140286a6a38dc3ccada25744322c391bfa0791bb3371804bceb85b6" Mar 14 09:50:32 crc kubenswrapper[4687]: I0314 09:50:32.737606 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:50:32 crc kubenswrapper[4687]: E0314 09:50:32.738789 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:50:34 crc kubenswrapper[4687]: I0314 09:50:34.737747 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:50:34 crc kubenswrapper[4687]: E0314 09:50:34.738943 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:50:42 crc kubenswrapper[4687]: I0314 09:50:42.737227 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:50:42 crc kubenswrapper[4687]: E0314 09:50:42.737826 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:44 crc kubenswrapper[4687]: I0314 09:50:44.737110 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:50:44 crc kubenswrapper[4687]: E0314 09:50:44.737523 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:50:48 crc kubenswrapper[4687]: I0314 09:50:48.736523 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:50:48 crc kubenswrapper[4687]: E0314 09:50:48.737396 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:50:57 crc kubenswrapper[4687]: I0314 09:50:57.737959 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:50:57 crc kubenswrapper[4687]: E0314 09:50:57.738642 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:50:58 crc kubenswrapper[4687]: I0314 09:50:58.737207 
4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:50:58 crc kubenswrapper[4687]: E0314 09:50:58.737633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:51:01 crc kubenswrapper[4687]: I0314 09:51:01.736887 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:51:01 crc kubenswrapper[4687]: E0314 09:51:01.737617 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:51:09 crc kubenswrapper[4687]: I0314 09:51:09.738054 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:51:09 crc kubenswrapper[4687]: E0314 09:51:09.739199 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:51:10 crc kubenswrapper[4687]: I0314 09:51:10.736920 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:51:10 crc 
kubenswrapper[4687]: E0314 09:51:10.737617 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:51:13 crc kubenswrapper[4687]: I0314 09:51:13.737642 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:51:13 crc kubenswrapper[4687]: E0314 09:51:13.738305 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:51:24 crc kubenswrapper[4687]: I0314 09:51:24.737836 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:51:24 crc kubenswrapper[4687]: E0314 09:51:24.739083 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:51:25 crc kubenswrapper[4687]: I0314 09:51:25.749190 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:51:25 crc kubenswrapper[4687]: E0314 09:51:25.750016 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:51:26 crc kubenswrapper[4687]: I0314 09:51:26.736769 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:51:26 crc kubenswrapper[4687]: E0314 09:51:26.737305 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.509723 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:51:37 crc kubenswrapper[4687]: E0314 09:51:37.510635 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f0c21c-d464-4362-aa44-713697871f0b" containerName="oc" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.510648 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f0c21c-d464-4362-aa44-713697871f0b" containerName="oc" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.510849 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f0c21c-d464-4362-aa44-713697871f0b" containerName="oc" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.512249 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.521125 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.589002 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.589158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7t9\" (UniqueName: \"kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.589210 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.691265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf7t9\" (UniqueName: \"kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.691323 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.691538 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.692028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.692072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.711494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7t9\" (UniqueName: \"kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9\") pod \"redhat-operators-tgcml\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.737439 4687 scope.go:117] "RemoveContainer" 
containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:51:37 crc kubenswrapper[4687]: E0314 09:51:37.737673 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:51:37 crc kubenswrapper[4687]: I0314 09:51:37.841262 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:38 crc kubenswrapper[4687]: I0314 09:51:38.353098 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:51:39 crc kubenswrapper[4687]: I0314 09:51:39.371859 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerID="ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101" exitCode=0 Mar 14 09:51:39 crc kubenswrapper[4687]: I0314 09:51:39.372354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerDied","Data":"ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101"} Mar 14 09:51:39 crc kubenswrapper[4687]: I0314 09:51:39.372385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerStarted","Data":"6fd9e69b6931fe567e1eb888fdebb96244c96f472b8b339176be9f8e22337fbb"} Mar 14 09:51:39 crc kubenswrapper[4687]: I0314 09:51:39.374998 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:51:39 crc kubenswrapper[4687]: I0314 09:51:39.736525 4687 scope.go:117] 
"RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:51:39 crc kubenswrapper[4687]: E0314 09:51:39.736888 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:51:40 crc kubenswrapper[4687]: I0314 09:51:40.386668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerStarted","Data":"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9"} Mar 14 09:51:40 crc kubenswrapper[4687]: I0314 09:51:40.737033 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:51:40 crc kubenswrapper[4687]: E0314 09:51:40.737362 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:51:44 crc kubenswrapper[4687]: I0314 09:51:44.431815 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerID="7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9" exitCode=0 Mar 14 09:51:44 crc kubenswrapper[4687]: I0314 09:51:44.431873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" 
event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerDied","Data":"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9"} Mar 14 09:51:45 crc kubenswrapper[4687]: I0314 09:51:45.445064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerStarted","Data":"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89"} Mar 14 09:51:45 crc kubenswrapper[4687]: I0314 09:51:45.466278 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tgcml" podStartSLOduration=2.9952083910000002 podStartE2EDuration="8.466255841s" podCreationTimestamp="2026-03-14 09:51:37 +0000 UTC" firstStartedPulling="2026-03-14 09:51:39.374742431 +0000 UTC m=+3284.362982806" lastFinishedPulling="2026-03-14 09:51:44.845789871 +0000 UTC m=+3289.834030256" observedRunningTime="2026-03-14 09:51:45.458647214 +0000 UTC m=+3290.446887609" watchObservedRunningTime="2026-03-14 09:51:45.466255841 +0000 UTC m=+3290.454496216" Mar 14 09:51:47 crc kubenswrapper[4687]: I0314 09:51:47.842849 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:47 crc kubenswrapper[4687]: I0314 09:51:47.843206 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:48 crc kubenswrapper[4687]: I0314 09:51:48.915864 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tgcml" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="registry-server" probeResult="failure" output=< Mar 14 09:51:48 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 09:51:48 crc kubenswrapper[4687]: > Mar 14 09:51:52 crc kubenswrapper[4687]: I0314 09:51:52.738443 4687 scope.go:117] "RemoveContainer" 
containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:51:52 crc kubenswrapper[4687]: E0314 09:51:52.740584 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:51:53 crc kubenswrapper[4687]: I0314 09:51:53.737660 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:51:53 crc kubenswrapper[4687]: E0314 09:51:53.738184 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:51:55 crc kubenswrapper[4687]: I0314 09:51:55.742755 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:51:56 crc kubenswrapper[4687]: I0314 09:51:56.542294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c"} Mar 14 09:51:57 crc kubenswrapper[4687]: I0314 09:51:57.916851 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:57 crc kubenswrapper[4687]: I0314 09:51:57.970360 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:51:58 crc 
kubenswrapper[4687]: I0314 09:51:58.151156 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:51:59 crc kubenswrapper[4687]: I0314 09:51:59.566625 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tgcml" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="registry-server" containerID="cri-o://56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89" gracePeriod=2 Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.028212 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.144029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf7t9\" (UniqueName: \"kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9\") pod \"7e6055d2-ca67-49c1-8338-799b099b7efd\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.144128 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content\") pod \"7e6055d2-ca67-49c1-8338-799b099b7efd\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.144196 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities\") pod \"7e6055d2-ca67-49c1-8338-799b099b7efd\" (UID: \"7e6055d2-ca67-49c1-8338-799b099b7efd\") " Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.144883 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities" 
(OuterVolumeSpecName: "utilities") pod "7e6055d2-ca67-49c1-8338-799b099b7efd" (UID: "7e6055d2-ca67-49c1-8338-799b099b7efd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.146075 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.149957 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9" (OuterVolumeSpecName: "kube-api-access-tf7t9") pod "7e6055d2-ca67-49c1-8338-799b099b7efd" (UID: "7e6055d2-ca67-49c1-8338-799b099b7efd"). InnerVolumeSpecName "kube-api-access-tf7t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.172966 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558032-7r79f"] Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.173758 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="extract-utilities" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.173790 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="extract-utilities" Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.173820 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="extract-content" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.173830 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="extract-content" Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.173889 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="registry-server" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.173897 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="registry-server" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.174187 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerName="registry-server" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.175418 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.177681 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.178638 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.178727 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.189409 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-7r79f"] Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.248198 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrcg\" (UniqueName: \"kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg\") pod \"auto-csr-approver-29558032-7r79f\" (UID: \"f99535b5-bd4a-4f73-baf2-daf532890ea9\") " pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.248481 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf7t9\" 
(UniqueName: \"kubernetes.io/projected/7e6055d2-ca67-49c1-8338-799b099b7efd-kube-api-access-tf7t9\") on node \"crc\" DevicePath \"\"" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.277436 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6055d2-ca67-49c1-8338-799b099b7efd" (UID: "7e6055d2-ca67-49c1-8338-799b099b7efd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.351458 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrcg\" (UniqueName: \"kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg\") pod \"auto-csr-approver-29558032-7r79f\" (UID: \"f99535b5-bd4a-4f73-baf2-daf532890ea9\") " pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.352146 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6055d2-ca67-49c1-8338-799b099b7efd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.370218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrcg\" (UniqueName: \"kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg\") pod \"auto-csr-approver-29558032-7r79f\" (UID: \"f99535b5-bd4a-4f73-baf2-daf532890ea9\") " pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.509673 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.606109 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e6055d2-ca67-49c1-8338-799b099b7efd" containerID="56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89" exitCode=0 Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.606263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerDied","Data":"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89"} Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.606293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgcml" event={"ID":"7e6055d2-ca67-49c1-8338-799b099b7efd","Type":"ContainerDied","Data":"6fd9e69b6931fe567e1eb888fdebb96244c96f472b8b339176be9f8e22337fbb"} Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.606347 4687 scope.go:117] "RemoveContainer" containerID="56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.606523 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tgcml" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.637395 4687 scope.go:117] "RemoveContainer" containerID="7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.676302 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.679207 4687 scope.go:117] "RemoveContainer" containerID="ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.714387 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tgcml"] Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.743276 4687 scope.go:117] "RemoveContainer" containerID="56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89" Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.743781 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89\": container with ID starting with 56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89 not found: ID does not exist" containerID="56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.743809 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89"} err="failed to get container status \"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89\": rpc error: code = NotFound desc = could not find container \"56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89\": container with ID starting with 56dd4808f543ca2640c93ec7ed88a32c6766e6d73f5771a43c97190361d3bd89 not found: ID does 
not exist" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.743831 4687 scope.go:117] "RemoveContainer" containerID="7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9" Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.744101 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9\": container with ID starting with 7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9 not found: ID does not exist" containerID="7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.744158 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9"} err="failed to get container status \"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9\": rpc error: code = NotFound desc = could not find container \"7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9\": container with ID starting with 7706a2b4b88b83525e7460a93e98b55a349ff257f32d2b4a31ef2e9754fe22e9 not found: ID does not exist" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.744192 4687 scope.go:117] "RemoveContainer" containerID="ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101" Mar 14 09:52:00 crc kubenswrapper[4687]: E0314 09:52:00.744536 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101\": container with ID starting with ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101 not found: ID does not exist" containerID="ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.744584 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101"} err="failed to get container status \"ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101\": rpc error: code = NotFound desc = could not find container \"ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101\": container with ID starting with ddd06ad2411bf76953b5eea0ff8a3f8ed7efd286b8d69c1e27f8bf0e42d8b101 not found: ID does not exist" Mar 14 09:52:00 crc kubenswrapper[4687]: I0314 09:52:00.958212 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-7r79f"] Mar 14 09:52:01 crc kubenswrapper[4687]: I0314 09:52:01.627494 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-7r79f" event={"ID":"f99535b5-bd4a-4f73-baf2-daf532890ea9","Type":"ContainerStarted","Data":"0fe713d7100f87b22ca380f02df5ff29b80250af6de8b819edeb5e82378f5624"} Mar 14 09:52:01 crc kubenswrapper[4687]: I0314 09:52:01.748412 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6055d2-ca67-49c1-8338-799b099b7efd" path="/var/lib/kubelet/pods/7e6055d2-ca67-49c1-8338-799b099b7efd/volumes" Mar 14 09:52:02 crc kubenswrapper[4687]: I0314 09:52:02.639991 4687 generic.go:334] "Generic (PLEG): container finished" podID="f99535b5-bd4a-4f73-baf2-daf532890ea9" containerID="b76c45705d4e1a4d7130de457f1f5009d7e9c9e888550fcd7fc6441ff66ef383" exitCode=0 Mar 14 09:52:02 crc kubenswrapper[4687]: I0314 09:52:02.640027 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-7r79f" event={"ID":"f99535b5-bd4a-4f73-baf2-daf532890ea9","Type":"ContainerDied","Data":"b76c45705d4e1a4d7130de457f1f5009d7e9c9e888550fcd7fc6441ff66ef383"} Mar 14 09:52:03 crc kubenswrapper[4687]: I0314 09:52:03.737309 4687 scope.go:117] "RemoveContainer" 
containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:52:03 crc kubenswrapper[4687]: E0314 09:52:03.737905 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:52:03 crc kubenswrapper[4687]: I0314 09:52:03.988034 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.036706 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmrcg\" (UniqueName: \"kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg\") pod \"f99535b5-bd4a-4f73-baf2-daf532890ea9\" (UID: \"f99535b5-bd4a-4f73-baf2-daf532890ea9\") " Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.042443 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg" (OuterVolumeSpecName: "kube-api-access-lmrcg") pod "f99535b5-bd4a-4f73-baf2-daf532890ea9" (UID: "f99535b5-bd4a-4f73-baf2-daf532890ea9"). InnerVolumeSpecName "kube-api-access-lmrcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.138776 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmrcg\" (UniqueName: \"kubernetes.io/projected/f99535b5-bd4a-4f73-baf2-daf532890ea9-kube-api-access-lmrcg\") on node \"crc\" DevicePath \"\"" Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.662528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-7r79f" event={"ID":"f99535b5-bd4a-4f73-baf2-daf532890ea9","Type":"ContainerDied","Data":"0fe713d7100f87b22ca380f02df5ff29b80250af6de8b819edeb5e82378f5624"} Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.662600 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe713d7100f87b22ca380f02df5ff29b80250af6de8b819edeb5e82378f5624" Mar 14 09:52:04 crc kubenswrapper[4687]: I0314 09:52:04.662608 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-7r79f" Mar 14 09:52:05 crc kubenswrapper[4687]: I0314 09:52:05.049528 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-dww2x"] Mar 14 09:52:05 crc kubenswrapper[4687]: I0314 09:52:05.059079 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-dww2x"] Mar 14 09:52:05 crc kubenswrapper[4687]: I0314 09:52:05.752686 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb016512-0cc0-428e-a0d1-f1beee23e06f" path="/var/lib/kubelet/pods/bb016512-0cc0-428e-a0d1-f1beee23e06f/volumes" Mar 14 09:52:08 crc kubenswrapper[4687]: I0314 09:52:08.737634 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:52:08 crc kubenswrapper[4687]: E0314 09:52:08.738072 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:52:17 crc kubenswrapper[4687]: I0314 09:52:17.736993 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:52:17 crc kubenswrapper[4687]: E0314 09:52:17.737966 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:52:20 crc kubenswrapper[4687]: I0314 09:52:20.744307 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:52:21 crc kubenswrapper[4687]: I0314 09:52:21.854177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8"} Mar 14 09:52:22 crc kubenswrapper[4687]: I0314 09:52:22.219668 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:52:22 crc kubenswrapper[4687]: I0314 09:52:22.219750 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:52:28 crc kubenswrapper[4687]: I0314 09:52:28.737749 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:52:28 crc kubenswrapper[4687]: E0314 09:52:28.738492 4687 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:52:28 crc kubenswrapper[4687]: I0314 09:52:28.923740 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" exitCode=1 Mar 14 09:52:28 crc kubenswrapper[4687]: I0314 09:52:28.923785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8"} Mar 14 09:52:28 crc kubenswrapper[4687]: I0314 09:52:28.923820 4687 scope.go:117] "RemoveContainer" containerID="fd22761f377527b6eb211f771525523613714b1fcacaf6278f512247ff45513c" Mar 14 09:52:28 crc kubenswrapper[4687]: I0314 09:52:28.924612 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:52:28 crc kubenswrapper[4687]: E0314 09:52:28.924859 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:52:30 crc kubenswrapper[4687]: I0314 09:52:30.079793 4687 scope.go:117] "RemoveContainer" containerID="aa5ce0dd42135800634d70c6a9ca3b7def144c9856a85cbff2ded7baf3d24c78" Mar 14 09:52:32 crc kubenswrapper[4687]: I0314 09:52:32.220687 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 
09:52:32 crc kubenswrapper[4687]: I0314 09:52:32.221785 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:52:32 crc kubenswrapper[4687]: I0314 09:52:32.222633 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:52:32 crc kubenswrapper[4687]: E0314 09:52:32.222862 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:52:42 crc kubenswrapper[4687]: I0314 09:52:42.737672 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:52:43 crc kubenswrapper[4687]: I0314 09:52:43.068224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946"} Mar 14 09:52:43 crc kubenswrapper[4687]: I0314 09:52:43.737480 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:52:43 crc kubenswrapper[4687]: E0314 09:52:43.737901 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:52:51 crc kubenswrapper[4687]: I0314 09:52:51.143374 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" exitCode=1 Mar 14 09:52:51 crc kubenswrapper[4687]: I0314 09:52:51.143445 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946"} Mar 14 09:52:51 crc kubenswrapper[4687]: I0314 09:52:51.144031 4687 scope.go:117] "RemoveContainer" containerID="74d56f761aacaa6486f000e3e6ffc1195b0927b7ad312ca8e3921fa785e63e55" Mar 14 09:52:51 crc kubenswrapper[4687]: I0314 09:52:51.144940 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:52:51 crc kubenswrapper[4687]: E0314 09:52:51.145251 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:52:52 crc kubenswrapper[4687]: I0314 09:52:52.128153 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:52:52 crc kubenswrapper[4687]: I0314 09:52:52.128437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:52:52 crc kubenswrapper[4687]: I0314 09:52:52.128451 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:52:52 crc kubenswrapper[4687]: I0314 09:52:52.128460 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:52:52 crc kubenswrapper[4687]: I0314 09:52:52.154406 4687 
scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:52:52 crc kubenswrapper[4687]: E0314 09:52:52.154633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:52:56 crc kubenswrapper[4687]: I0314 09:52:56.737625 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:52:56 crc kubenswrapper[4687]: E0314 09:52:56.738303 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:53:06 crc kubenswrapper[4687]: I0314 09:53:06.737817 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:53:06 crc kubenswrapper[4687]: E0314 09:53:06.738668 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:53:09 crc kubenswrapper[4687]: I0314 09:53:09.737754 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:53:09 crc kubenswrapper[4687]: E0314 09:53:09.738376 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:53:20 crc kubenswrapper[4687]: I0314 09:53:20.737315 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:53:20 crc kubenswrapper[4687]: E0314 09:53:20.738355 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:53:23 crc kubenswrapper[4687]: I0314 09:53:23.737718 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:53:23 crc kubenswrapper[4687]: E0314 09:53:23.738387 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:53:33 crc kubenswrapper[4687]: I0314 09:53:33.737652 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:53:33 crc kubenswrapper[4687]: E0314 09:53:33.738473 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:53:36 crc kubenswrapper[4687]: I0314 09:53:36.737235 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:53:36 crc kubenswrapper[4687]: E0314 09:53:36.737739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:53:48 crc kubenswrapper[4687]: I0314 09:53:48.737297 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:53:48 crc kubenswrapper[4687]: E0314 09:53:48.738273 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:53:50 crc kubenswrapper[4687]: I0314 09:53:50.737103 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:53:50 crc kubenswrapper[4687]: E0314 09:53:50.737586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.110993 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:53:59 crc kubenswrapper[4687]: E0314 09:53:59.112259 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99535b5-bd4a-4f73-baf2-daf532890ea9" containerName="oc" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.112280 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99535b5-bd4a-4f73-baf2-daf532890ea9" containerName="oc" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.112634 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99535b5-bd4a-4f73-baf2-daf532890ea9" containerName="oc" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.114593 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.126456 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.165898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.268668 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9ml\" (UniqueName: \"kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.268725 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.268796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.269259 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.370172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9ml\" (UniqueName: \"kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.370260 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.370936 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.393951 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9ml\" (UniqueName: \"kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml\") pod \"certified-operators-w4vzr\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.438434 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:53:59 crc kubenswrapper[4687]: I0314 09:53:59.977273 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.145572 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558034-mfqj5"] Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.147114 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.152560 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.152997 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.153563 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.163459 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-mfqj5"] Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.293824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxw4\" (UniqueName: \"kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4\") pod \"auto-csr-approver-29558034-mfqj5\" (UID: \"5263ffa9-c4aa-46f1-a90e-95d79cd35119\") " pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.395198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxw4\" (UniqueName: \"kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4\") pod \"auto-csr-approver-29558034-mfqj5\" (UID: \"5263ffa9-c4aa-46f1-a90e-95d79cd35119\") " pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.421983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxw4\" (UniqueName: \"kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4\") pod \"auto-csr-approver-29558034-mfqj5\" (UID: \"5263ffa9-c4aa-46f1-a90e-95d79cd35119\") " 
pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.463542 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.797925 4687 generic.go:334] "Generic (PLEG): container finished" podID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerID="5ff888c6858ec98e845cd3bcdd76428e1e25f452e2485581f86c919dd1fdb459" exitCode=0 Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.798363 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerDied","Data":"5ff888c6858ec98e845cd3bcdd76428e1e25f452e2485581f86c919dd1fdb459"} Mar 14 09:54:00 crc kubenswrapper[4687]: I0314 09:54:00.798397 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerStarted","Data":"2f94e089fc4ba06aa1238aeede855965af2448753469fd4807e1e8c009b69549"} Mar 14 09:54:01 crc kubenswrapper[4687]: I0314 09:54:01.146866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-mfqj5"] Mar 14 09:54:01 crc kubenswrapper[4687]: W0314 09:54:01.166918 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5263ffa9_c4aa_46f1_a90e_95d79cd35119.slice/crio-81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2 WatchSource:0}: Error finding container 81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2: Status 404 returned error can't find the container with id 81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2 Mar 14 09:54:01 crc kubenswrapper[4687]: I0314 09:54:01.737516 4687 scope.go:117] "RemoveContainer" 
containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:54:01 crc kubenswrapper[4687]: E0314 09:54:01.738052 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:54:01 crc kubenswrapper[4687]: I0314 09:54:01.806912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" event={"ID":"5263ffa9-c4aa-46f1-a90e-95d79cd35119","Type":"ContainerStarted","Data":"81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2"} Mar 14 09:54:01 crc kubenswrapper[4687]: I0314 09:54:01.808700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerStarted","Data":"6cce6377159fb7da4fcef3947c369654ec5ebfecbb2e9a3333c0d502006250ce"} Mar 14 09:54:02 crc kubenswrapper[4687]: I0314 09:54:02.736797 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:54:02 crc kubenswrapper[4687]: E0314 09:54:02.737307 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:54:02 crc kubenswrapper[4687]: I0314 09:54:02.818363 4687 generic.go:334] "Generic (PLEG): container finished" podID="5263ffa9-c4aa-46f1-a90e-95d79cd35119" containerID="8d1c9066d7313299219daec14821815b12352301e578bb7d56c12b70a2be0015" exitCode=0 Mar 14 09:54:02 
crc kubenswrapper[4687]: I0314 09:54:02.818413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" event={"ID":"5263ffa9-c4aa-46f1-a90e-95d79cd35119","Type":"ContainerDied","Data":"8d1c9066d7313299219daec14821815b12352301e578bb7d56c12b70a2be0015"} Mar 14 09:54:03 crc kubenswrapper[4687]: I0314 09:54:03.831635 4687 generic.go:334] "Generic (PLEG): container finished" podID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerID="6cce6377159fb7da4fcef3947c369654ec5ebfecbb2e9a3333c0d502006250ce" exitCode=0 Mar 14 09:54:03 crc kubenswrapper[4687]: I0314 09:54:03.831786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerDied","Data":"6cce6377159fb7da4fcef3947c369654ec5ebfecbb2e9a3333c0d502006250ce"} Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.240380 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.382714 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxw4\" (UniqueName: \"kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4\") pod \"5263ffa9-c4aa-46f1-a90e-95d79cd35119\" (UID: \"5263ffa9-c4aa-46f1-a90e-95d79cd35119\") " Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.391152 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4" (OuterVolumeSpecName: "kube-api-access-6jxw4") pod "5263ffa9-c4aa-46f1-a90e-95d79cd35119" (UID: "5263ffa9-c4aa-46f1-a90e-95d79cd35119"). InnerVolumeSpecName "kube-api-access-6jxw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.485957 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxw4\" (UniqueName: \"kubernetes.io/projected/5263ffa9-c4aa-46f1-a90e-95d79cd35119-kube-api-access-6jxw4\") on node \"crc\" DevicePath \"\"" Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.843441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerStarted","Data":"ffc606fabedbc5c4e468966c010b5c9b7c2a7b3db367742eb925fdf0ba1188c3"} Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.845730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" event={"ID":"5263ffa9-c4aa-46f1-a90e-95d79cd35119","Type":"ContainerDied","Data":"81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2"} Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.845772 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a9f5789c74d9d97f1705bd10302eea54fc90a1dcd4f16b6b65ca866423a8e2" Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.845814 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-mfqj5" Mar 14 09:54:04 crc kubenswrapper[4687]: I0314 09:54:04.868408 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4vzr" podStartSLOduration=2.458322362 podStartE2EDuration="5.868386725s" podCreationTimestamp="2026-03-14 09:53:59 +0000 UTC" firstStartedPulling="2026-03-14 09:54:00.800767169 +0000 UTC m=+3425.789007544" lastFinishedPulling="2026-03-14 09:54:04.210831532 +0000 UTC m=+3429.199071907" observedRunningTime="2026-03-14 09:54:04.866684713 +0000 UTC m=+3429.854925108" watchObservedRunningTime="2026-03-14 09:54:04.868386725 +0000 UTC m=+3429.856627120" Mar 14 09:54:05 crc kubenswrapper[4687]: I0314 09:54:05.317378 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-qqfxw"] Mar 14 09:54:05 crc kubenswrapper[4687]: I0314 09:54:05.327241 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-qqfxw"] Mar 14 09:54:05 crc kubenswrapper[4687]: I0314 09:54:05.747933 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcf385c-b4fc-41c1-a17f-99eb29786fec" path="/var/lib/kubelet/pods/5fcf385c-b4fc-41c1-a17f-99eb29786fec/volumes" Mar 14 09:54:09 crc kubenswrapper[4687]: I0314 09:54:09.439792 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:09 crc kubenswrapper[4687]: I0314 09:54:09.440483 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:09 crc kubenswrapper[4687]: I0314 09:54:09.494663 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:09 crc kubenswrapper[4687]: I0314 09:54:09.938606 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:09 crc kubenswrapper[4687]: I0314 09:54:09.987596 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:54:11 crc kubenswrapper[4687]: I0314 09:54:11.916642 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w4vzr" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="registry-server" containerID="cri-o://ffc606fabedbc5c4e468966c010b5c9b7c2a7b3db367742eb925fdf0ba1188c3" gracePeriod=2 Mar 14 09:54:12 crc kubenswrapper[4687]: I0314 09:54:12.927795 4687 generic.go:334] "Generic (PLEG): container finished" podID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerID="ffc606fabedbc5c4e468966c010b5c9b7c2a7b3db367742eb925fdf0ba1188c3" exitCode=0 Mar 14 09:54:12 crc kubenswrapper[4687]: I0314 09:54:12.927840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerDied","Data":"ffc606fabedbc5c4e468966c010b5c9b7c2a7b3db367742eb925fdf0ba1188c3"} Mar 14 09:54:12 crc kubenswrapper[4687]: I0314 09:54:12.928364 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4vzr" event={"ID":"01c812fc-b450-4bcb-92aa-dea32e8dc962","Type":"ContainerDied","Data":"2f94e089fc4ba06aa1238aeede855965af2448753469fd4807e1e8c009b69549"} Mar 14 09:54:12 crc kubenswrapper[4687]: I0314 09:54:12.928387 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f94e089fc4ba06aa1238aeede855965af2448753469fd4807e1e8c009b69549" Mar 14 09:54:12 crc kubenswrapper[4687]: I0314 09:54:12.987513 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.046885 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9ml\" (UniqueName: \"kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml\") pod \"01c812fc-b450-4bcb-92aa-dea32e8dc962\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.047074 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities\") pod \"01c812fc-b450-4bcb-92aa-dea32e8dc962\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.047192 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content\") pod \"01c812fc-b450-4bcb-92aa-dea32e8dc962\" (UID: \"01c812fc-b450-4bcb-92aa-dea32e8dc962\") " Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.048905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities" (OuterVolumeSpecName: "utilities") pod "01c812fc-b450-4bcb-92aa-dea32e8dc962" (UID: "01c812fc-b450-4bcb-92aa-dea32e8dc962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.057447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml" (OuterVolumeSpecName: "kube-api-access-ck9ml") pod "01c812fc-b450-4bcb-92aa-dea32e8dc962" (UID: "01c812fc-b450-4bcb-92aa-dea32e8dc962"). InnerVolumeSpecName "kube-api-access-ck9ml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.101685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01c812fc-b450-4bcb-92aa-dea32e8dc962" (UID: "01c812fc-b450-4bcb-92aa-dea32e8dc962"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.150974 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck9ml\" (UniqueName: \"kubernetes.io/projected/01c812fc-b450-4bcb-92aa-dea32e8dc962-kube-api-access-ck9ml\") on node \"crc\" DevicePath \"\"" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.151013 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.151023 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c812fc-b450-4bcb-92aa-dea32e8dc962-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.936927 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4vzr" Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.963192 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:54:13 crc kubenswrapper[4687]: I0314 09:54:13.973770 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4vzr"] Mar 14 09:54:15 crc kubenswrapper[4687]: I0314 09:54:15.746291 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:54:15 crc kubenswrapper[4687]: E0314 09:54:15.746539 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:54:15 crc kubenswrapper[4687]: I0314 09:54:15.746572 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:54:15 crc kubenswrapper[4687]: E0314 09:54:15.746921 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:54:15 crc kubenswrapper[4687]: I0314 09:54:15.750354 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" path="/var/lib/kubelet/pods/01c812fc-b450-4bcb-92aa-dea32e8dc962/volumes" Mar 14 09:54:24 crc kubenswrapper[4687]: I0314 09:54:24.112023 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:54:24 crc kubenswrapper[4687]: I0314 09:54:24.112627 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:54:30 crc kubenswrapper[4687]: I0314 09:54:30.186552 4687 scope.go:117] "RemoveContainer" containerID="c8d85eeb3c703fc6064684dd255bf2d52668e134df9b0e8780fcb8e91fb88eef" Mar 14 09:54:30 crc kubenswrapper[4687]: I0314 09:54:30.737883 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:54:30 crc kubenswrapper[4687]: I0314 09:54:30.738075 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:54:30 crc kubenswrapper[4687]: E0314 09:54:30.738111 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:54:30 crc kubenswrapper[4687]: E0314 09:54:30.739431 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:54:42 crc 
kubenswrapper[4687]: I0314 09:54:42.737412 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:54:42 crc kubenswrapper[4687]: E0314 09:54:42.738117 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:54:45 crc kubenswrapper[4687]: I0314 09:54:45.742798 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:54:45 crc kubenswrapper[4687]: E0314 09:54:45.743512 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:54:53 crc kubenswrapper[4687]: I0314 09:54:53.737132 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:54:53 crc kubenswrapper[4687]: E0314 09:54:53.738001 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:54:54 crc kubenswrapper[4687]: I0314 09:54:54.111252 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:54:54 crc kubenswrapper[4687]: I0314 09:54:54.111720 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:55:00 crc kubenswrapper[4687]: I0314 09:55:00.736885 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:55:00 crc kubenswrapper[4687]: E0314 09:55:00.737667 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:55:07 crc kubenswrapper[4687]: I0314 09:55:07.738040 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:55:07 crc kubenswrapper[4687]: E0314 09:55:07.738851 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:55:14 crc kubenswrapper[4687]: I0314 09:55:14.737775 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:55:14 crc kubenswrapper[4687]: E0314 09:55:14.738594 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:55:22 crc kubenswrapper[4687]: I0314 09:55:22.736533 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:55:22 crc kubenswrapper[4687]: E0314 09:55:22.737265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.111553 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.111891 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.111942 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.112857 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.112930 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c" gracePeriod=600 Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.967117 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c" exitCode=0 Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.967202 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c"} Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.967675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5"} Mar 14 09:55:24 crc kubenswrapper[4687]: I0314 09:55:24.967701 4687 scope.go:117] "RemoveContainer" containerID="d582f927ebccf2662e4634ae0b192ac4129fed076e32aedc1854f5a34ac4839d" Mar 14 09:55:25 crc kubenswrapper[4687]: I0314 09:55:25.742813 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:55:25 crc kubenswrapper[4687]: E0314 
09:55:25.743096 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:55:34 crc kubenswrapper[4687]: I0314 09:55:34.737672 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:55:34 crc kubenswrapper[4687]: E0314 09:55:34.738921 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:55:40 crc kubenswrapper[4687]: I0314 09:55:40.737913 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:55:40 crc kubenswrapper[4687]: E0314 09:55:40.740390 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:55:45 crc kubenswrapper[4687]: I0314 09:55:45.743566 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:55:45 crc kubenswrapper[4687]: E0314 09:55:45.744250 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.000105 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:55:51 crc kubenswrapper[4687]: E0314 09:55:51.001303 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="extract-utilities" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001319 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="extract-utilities" Mar 14 09:55:51 crc kubenswrapper[4687]: E0314 09:55:51.001368 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5263ffa9-c4aa-46f1-a90e-95d79cd35119" containerName="oc" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001377 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5263ffa9-c4aa-46f1-a90e-95d79cd35119" containerName="oc" Mar 14 09:55:51 crc kubenswrapper[4687]: E0314 09:55:51.001408 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="extract-content" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001418 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="extract-content" Mar 14 09:55:51 crc kubenswrapper[4687]: E0314 09:55:51.001429 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="registry-server" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001436 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="registry-server" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001687 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5263ffa9-c4aa-46f1-a90e-95d79cd35119" containerName="oc" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.001701 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c812fc-b450-4bcb-92aa-dea32e8dc962" containerName="registry-server" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.003682 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.013845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.017044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjr9\" (UniqueName: \"kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.017220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.017256 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.119500 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.119563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.119692 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjr9\" (UniqueName: \"kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.120061 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.120076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.141959 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bwjr9\" (UniqueName: \"kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9\") pod \"community-operators-9lhtr\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.335585 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.611027 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.614040 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.625109 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.632100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.632265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhg2k\" (UniqueName: \"kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.632299 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.734174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhg2k\" (UniqueName: \"kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.734250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.734295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.734823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.734970 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.738319 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:55:51 crc kubenswrapper[4687]: E0314 09:55:51.738560 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.754231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhg2k\" (UniqueName: \"kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k\") pod \"redhat-marketplace-tmnj5\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.871856 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:55:51 crc kubenswrapper[4687]: I0314 09:55:51.941030 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:55:52 crc kubenswrapper[4687]: I0314 09:55:52.212319 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:55:52 crc kubenswrapper[4687]: I0314 09:55:52.228756 4687 generic.go:334] "Generic (PLEG): container finished" podID="a986ee1c-e47d-4524-9723-0009f51c1133" containerID="31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d" exitCode=0 Mar 14 09:55:52 crc kubenswrapper[4687]: I0314 09:55:52.228877 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerDied","Data":"31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d"} Mar 14 09:55:52 crc kubenswrapper[4687]: I0314 09:55:52.228957 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerStarted","Data":"c022cdb115811664d7d1558023acf311e119881992acbb6de8decd891305e1bf"} Mar 14 09:55:53 crc kubenswrapper[4687]: I0314 09:55:53.240237 4687 generic.go:334] "Generic (PLEG): container finished" podID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerID="9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc" exitCode=0 Mar 14 09:55:53 crc kubenswrapper[4687]: I0314 09:55:53.241892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerDied","Data":"9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc"} Mar 14 09:55:53 crc kubenswrapper[4687]: I0314 09:55:53.241933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" 
event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerStarted","Data":"41d2adc661f012e624dc312c74941a4c892e2488ecc75a05c0fad4ce4a26b568"} Mar 14 09:55:53 crc kubenswrapper[4687]: I0314 09:55:53.244543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerStarted","Data":"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767"} Mar 14 09:55:54 crc kubenswrapper[4687]: I0314 09:55:54.257619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerStarted","Data":"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71"} Mar 14 09:55:54 crc kubenswrapper[4687]: I0314 09:55:54.261151 4687 generic.go:334] "Generic (PLEG): container finished" podID="a986ee1c-e47d-4524-9723-0009f51c1133" containerID="01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767" exitCode=0 Mar 14 09:55:54 crc kubenswrapper[4687]: I0314 09:55:54.261209 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerDied","Data":"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767"} Mar 14 09:55:55 crc kubenswrapper[4687]: I0314 09:55:55.270460 4687 generic.go:334] "Generic (PLEG): container finished" podID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerID="ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71" exitCode=0 Mar 14 09:55:55 crc kubenswrapper[4687]: I0314 09:55:55.270558 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerDied","Data":"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71"} Mar 14 09:55:55 crc kubenswrapper[4687]: I0314 
09:55:55.273953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerStarted","Data":"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6"} Mar 14 09:55:55 crc kubenswrapper[4687]: I0314 09:55:55.331074 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9lhtr" podStartSLOduration=2.9025652060000002 podStartE2EDuration="5.331053663s" podCreationTimestamp="2026-03-14 09:55:50 +0000 UTC" firstStartedPulling="2026-03-14 09:55:52.230387934 +0000 UTC m=+3537.218628309" lastFinishedPulling="2026-03-14 09:55:54.658876391 +0000 UTC m=+3539.647116766" observedRunningTime="2026-03-14 09:55:55.329192368 +0000 UTC m=+3540.317432743" watchObservedRunningTime="2026-03-14 09:55:55.331053663 +0000 UTC m=+3540.319294038" Mar 14 09:55:56 crc kubenswrapper[4687]: I0314 09:55:56.290382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerStarted","Data":"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814"} Mar 14 09:55:56 crc kubenswrapper[4687]: I0314 09:55:56.322504 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tmnj5" podStartSLOduration=2.818659802 podStartE2EDuration="5.322472372s" podCreationTimestamp="2026-03-14 09:55:51 +0000 UTC" firstStartedPulling="2026-03-14 09:55:53.242755139 +0000 UTC m=+3538.230995504" lastFinishedPulling="2026-03-14 09:55:55.746567699 +0000 UTC m=+3540.734808074" observedRunningTime="2026-03-14 09:55:56.310097778 +0000 UTC m=+3541.298338183" watchObservedRunningTime="2026-03-14 09:55:56.322472372 +0000 UTC m=+3541.310712787" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.142702 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29558036-7fp4d"] Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.146015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.147915 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.148297 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.148641 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.155687 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-7fp4d"] Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.309687 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65m54\" (UniqueName: \"kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54\") pod \"auto-csr-approver-29558036-7fp4d\" (UID: \"62ef0181-7084-48b9-8589-e19b2cd4498d\") " pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.411783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65m54\" (UniqueName: \"kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54\") pod \"auto-csr-approver-29558036-7fp4d\" (UID: \"62ef0181-7084-48b9-8589-e19b2cd4498d\") " pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.432410 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65m54\" (UniqueName: 
\"kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54\") pod \"auto-csr-approver-29558036-7fp4d\" (UID: \"62ef0181-7084-48b9-8589-e19b2cd4498d\") " pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.463704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.737149 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:00 crc kubenswrapper[4687]: E0314 09:56:00.737749 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:56:00 crc kubenswrapper[4687]: I0314 09:56:00.915609 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-7fp4d"] Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.332640 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" event={"ID":"62ef0181-7084-48b9-8589-e19b2cd4498d","Type":"ContainerStarted","Data":"453a4fc3a50d77edf1c3f69f6cd584f34134d22a9a38205b2aedfe0ac4fbf098"} Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.335975 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.336651 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.384206 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.941291 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.941842 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:01 crc kubenswrapper[4687]: I0314 09:56:01.997373 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:02 crc kubenswrapper[4687]: I0314 09:56:02.343291 4687 generic.go:334] "Generic (PLEG): container finished" podID="62ef0181-7084-48b9-8589-e19b2cd4498d" containerID="15ba539d79e7e4513e675ab8a8fdf1fb75e7a408abfd0d342dbba978b6cd7239" exitCode=0 Mar 14 09:56:02 crc kubenswrapper[4687]: I0314 09:56:02.343483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" event={"ID":"62ef0181-7084-48b9-8589-e19b2cd4498d","Type":"ContainerDied","Data":"15ba539d79e7e4513e675ab8a8fdf1fb75e7a408abfd0d342dbba978b6cd7239"} Mar 14 09:56:02 crc kubenswrapper[4687]: I0314 09:56:02.390609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:02 crc kubenswrapper[4687]: I0314 09:56:02.405931 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:02 crc kubenswrapper[4687]: I0314 09:56:02.737092 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:56:02 crc kubenswrapper[4687]: E0314 09:56:02.737586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:56:03 crc kubenswrapper[4687]: I0314 09:56:03.022702 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:56:03 crc kubenswrapper[4687]: I0314 09:56:03.685017 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:03 crc kubenswrapper[4687]: I0314 09:56:03.876653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65m54\" (UniqueName: \"kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54\") pod \"62ef0181-7084-48b9-8589-e19b2cd4498d\" (UID: \"62ef0181-7084-48b9-8589-e19b2cd4498d\") " Mar 14 09:56:03 crc kubenswrapper[4687]: I0314 09:56:03.882794 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54" (OuterVolumeSpecName: "kube-api-access-65m54") pod "62ef0181-7084-48b9-8589-e19b2cd4498d" (UID: "62ef0181-7084-48b9-8589-e19b2cd4498d"). InnerVolumeSpecName "kube-api-access-65m54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:56:03 crc kubenswrapper[4687]: I0314 09:56:03.979462 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65m54\" (UniqueName: \"kubernetes.io/projected/62ef0181-7084-48b9-8589-e19b2cd4498d-kube-api-access-65m54\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.359918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" event={"ID":"62ef0181-7084-48b9-8589-e19b2cd4498d","Type":"ContainerDied","Data":"453a4fc3a50d77edf1c3f69f6cd584f34134d22a9a38205b2aedfe0ac4fbf098"} Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.360004 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453a4fc3a50d77edf1c3f69f6cd584f34134d22a9a38205b2aedfe0ac4fbf098" Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.359960 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-7fp4d" Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.360289 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tmnj5" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="registry-server" containerID="cri-o://2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814" gracePeriod=2 Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.422208 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.755878 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-vsh4v"] Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.761498 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-vsh4v"] Mar 14 09:56:04 crc 
kubenswrapper[4687]: I0314 09:56:04.859478 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.998770 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content\") pod \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.998844 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhg2k\" (UniqueName: \"kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k\") pod \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " Mar 14 09:56:04 crc kubenswrapper[4687]: I0314 09:56:04.998922 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities\") pod \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\" (UID: \"5f0e54a9-4692-4ae6-9df5-b0c839361e54\") " Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.000914 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities" (OuterVolumeSpecName: "utilities") pod "5f0e54a9-4692-4ae6-9df5-b0c839361e54" (UID: "5f0e54a9-4692-4ae6-9df5-b0c839361e54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.021548 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k" (OuterVolumeSpecName: "kube-api-access-bhg2k") pod "5f0e54a9-4692-4ae6-9df5-b0c839361e54" (UID: "5f0e54a9-4692-4ae6-9df5-b0c839361e54"). InnerVolumeSpecName "kube-api-access-bhg2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.025685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f0e54a9-4692-4ae6-9df5-b0c839361e54" (UID: "5f0e54a9-4692-4ae6-9df5-b0c839361e54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.102509 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.102551 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhg2k\" (UniqueName: \"kubernetes.io/projected/5f0e54a9-4692-4ae6-9df5-b0c839361e54-kube-api-access-bhg2k\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.102564 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0e54a9-4692-4ae6-9df5-b0c839361e54-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.370499 4687 generic.go:334] "Generic (PLEG): container finished" podID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" 
containerID="2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814" exitCode=0 Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.371000 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9lhtr" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="registry-server" containerID="cri-o://c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6" gracePeriod=2 Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.370558 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tmnj5" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.370573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerDied","Data":"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814"} Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.371297 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tmnj5" event={"ID":"5f0e54a9-4692-4ae6-9df5-b0c839361e54","Type":"ContainerDied","Data":"41d2adc661f012e624dc312c74941a4c892e2488ecc75a05c0fad4ce4a26b568"} Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.371321 4687 scope.go:117] "RemoveContainer" containerID="2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.391518 4687 scope.go:117] "RemoveContainer" containerID="ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.410717 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.414548 4687 scope.go:117] "RemoveContainer" 
containerID="9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.422939 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tmnj5"] Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.564908 4687 scope.go:117] "RemoveContainer" containerID="2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814" Mar 14 09:56:05 crc kubenswrapper[4687]: E0314 09:56:05.569897 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814\": container with ID starting with 2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814 not found: ID does not exist" containerID="2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.569950 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814"} err="failed to get container status \"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814\": rpc error: code = NotFound desc = could not find container \"2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814\": container with ID starting with 2ff7eb9baf7cd0b70c142966860922cb30388068946561f47a52277836963814 not found: ID does not exist" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.569985 4687 scope.go:117] "RemoveContainer" containerID="ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71" Mar 14 09:56:05 crc kubenswrapper[4687]: E0314 09:56:05.570392 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71\": container with ID starting with 
ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71 not found: ID does not exist" containerID="ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.570422 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71"} err="failed to get container status \"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71\": rpc error: code = NotFound desc = could not find container \"ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71\": container with ID starting with ede9c18568f4371b6d3deab2d9bfa7c37f13a5c745fa17f2c1a245d468a82c71 not found: ID does not exist" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.570459 4687 scope.go:117] "RemoveContainer" containerID="9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc" Mar 14 09:56:05 crc kubenswrapper[4687]: E0314 09:56:05.570839 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc\": container with ID starting with 9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc not found: ID does not exist" containerID="9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.570879 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc"} err="failed to get container status \"9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc\": rpc error: code = NotFound desc = could not find container \"9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc\": container with ID starting with 9f301c1cbe4ae773c78c331a9a281adc7e5be19a9491b3c9ed6ed5a78f3e59bc not found: ID does not 
exist" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.719004 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.751156 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f0c21c-d464-4362-aa44-713697871f0b" path="/var/lib/kubelet/pods/08f0c21c-d464-4362-aa44-713697871f0b/volumes" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.751822 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" path="/var/lib/kubelet/pods/5f0e54a9-4692-4ae6-9df5-b0c839361e54/volumes" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.817053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjr9\" (UniqueName: \"kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9\") pod \"a986ee1c-e47d-4524-9723-0009f51c1133\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.817173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities\") pod \"a986ee1c-e47d-4524-9723-0009f51c1133\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.817491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content\") pod \"a986ee1c-e47d-4524-9723-0009f51c1133\" (UID: \"a986ee1c-e47d-4524-9723-0009f51c1133\") " Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.818102 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities" 
(OuterVolumeSpecName: "utilities") pod "a986ee1c-e47d-4524-9723-0009f51c1133" (UID: "a986ee1c-e47d-4524-9723-0009f51c1133"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.821441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9" (OuterVolumeSpecName: "kube-api-access-bwjr9") pod "a986ee1c-e47d-4524-9723-0009f51c1133" (UID: "a986ee1c-e47d-4524-9723-0009f51c1133"). InnerVolumeSpecName "kube-api-access-bwjr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.875461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a986ee1c-e47d-4524-9723-0009f51c1133" (UID: "a986ee1c-e47d-4524-9723-0009f51c1133"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.918969 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.919009 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwjr9\" (UniqueName: \"kubernetes.io/projected/a986ee1c-e47d-4524-9723-0009f51c1133-kube-api-access-bwjr9\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:05 crc kubenswrapper[4687]: I0314 09:56:05.919022 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a986ee1c-e47d-4524-9723-0009f51c1133-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.415446 4687 generic.go:334] "Generic (PLEG): container finished" podID="a986ee1c-e47d-4524-9723-0009f51c1133" containerID="c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6" exitCode=0 Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.415520 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lhtr" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.415533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerDied","Data":"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6"} Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.415644 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lhtr" event={"ID":"a986ee1c-e47d-4524-9723-0009f51c1133","Type":"ContainerDied","Data":"c022cdb115811664d7d1558023acf311e119881992acbb6de8decd891305e1bf"} Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.415679 4687 scope.go:117] "RemoveContainer" containerID="c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.451045 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.458465 4687 scope.go:117] "RemoveContainer" containerID="01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.459385 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9lhtr"] Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.484510 4687 scope.go:117] "RemoveContainer" containerID="31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.528549 4687 scope.go:117] "RemoveContainer" containerID="c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6" Mar 14 09:56:06 crc kubenswrapper[4687]: E0314 09:56:06.529172 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6\": container with ID starting with c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6 not found: ID does not exist" containerID="c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.529221 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6"} err="failed to get container status \"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6\": rpc error: code = NotFound desc = could not find container \"c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6\": container with ID starting with c466b909d0e79774308f98d4c49b057f57e1196f6cb07b5150b8960cff9852d6 not found: ID does not exist" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.529247 4687 scope.go:117] "RemoveContainer" containerID="01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767" Mar 14 09:56:06 crc kubenswrapper[4687]: E0314 09:56:06.529599 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767\": container with ID starting with 01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767 not found: ID does not exist" containerID="01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.529710 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767"} err="failed to get container status \"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767\": rpc error: code = NotFound desc = could not find container \"01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767\": container with ID 
starting with 01dbe4cd9dce7a66d7e035c57df0246105108d8b5402f423a218b1fe391c4767 not found: ID does not exist" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.529753 4687 scope.go:117] "RemoveContainer" containerID="31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d" Mar 14 09:56:06 crc kubenswrapper[4687]: E0314 09:56:06.530483 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d\": container with ID starting with 31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d not found: ID does not exist" containerID="31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d" Mar 14 09:56:06 crc kubenswrapper[4687]: I0314 09:56:06.530523 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d"} err="failed to get container status \"31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d\": rpc error: code = NotFound desc = could not find container \"31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d\": container with ID starting with 31b64c793dd334f4602f8ce14b17096fd3eae3762d0e2b072201b3b197d9585d not found: ID does not exist" Mar 14 09:56:07 crc kubenswrapper[4687]: I0314 09:56:07.747928 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" path="/var/lib/kubelet/pods/a986ee1c-e47d-4524-9723-0009f51c1133/volumes" Mar 14 09:56:11 crc kubenswrapper[4687]: I0314 09:56:11.745992 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:11 crc kubenswrapper[4687]: E0314 09:56:11.746909 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:56:15 crc kubenswrapper[4687]: I0314 09:56:15.746081 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:56:15 crc kubenswrapper[4687]: E0314 09:56:15.746588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:56:23 crc kubenswrapper[4687]: I0314 09:56:23.737476 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:23 crc kubenswrapper[4687]: E0314 09:56:23.738165 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:56:27 crc kubenswrapper[4687]: I0314 09:56:27.737300 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:56:27 crc kubenswrapper[4687]: E0314 09:56:27.738156 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:56:30 crc kubenswrapper[4687]: 
I0314 09:56:30.298491 4687 scope.go:117] "RemoveContainer" containerID="4a05b5ef6e6bf506c963f5724800ef716cf437947271cace3f0eecdc25d7636b" Mar 14 09:56:35 crc kubenswrapper[4687]: I0314 09:56:35.745739 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:35 crc kubenswrapper[4687]: E0314 09:56:35.746649 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:56:38 crc kubenswrapper[4687]: I0314 09:56:38.737289 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:56:38 crc kubenswrapper[4687]: E0314 09:56:38.737807 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:56:46 crc kubenswrapper[4687]: I0314 09:56:46.737482 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:46 crc kubenswrapper[4687]: E0314 09:56:46.738152 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:56:50 crc kubenswrapper[4687]: I0314 09:56:50.737741 4687 scope.go:117] 
"RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:56:50 crc kubenswrapper[4687]: E0314 09:56:50.738515 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:56:57 crc kubenswrapper[4687]: I0314 09:56:57.737107 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:56:57 crc kubenswrapper[4687]: E0314 09:56:57.737919 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:01 crc kubenswrapper[4687]: I0314 09:57:01.737147 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:57:01 crc kubenswrapper[4687]: E0314 09:57:01.738059 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:57:09 crc kubenswrapper[4687]: I0314 09:57:09.737603 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:57:09 crc kubenswrapper[4687]: E0314 09:57:09.738180 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:15 crc kubenswrapper[4687]: I0314 09:57:15.747106 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:57:15 crc kubenswrapper[4687]: E0314 09:57:15.747872 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:57:22 crc kubenswrapper[4687]: I0314 09:57:22.736909 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:57:22 crc kubenswrapper[4687]: E0314 09:57:22.737577 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:24 crc kubenswrapper[4687]: I0314 09:57:24.111655 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:57:24 crc kubenswrapper[4687]: I0314 09:57:24.111984 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:57:28 crc kubenswrapper[4687]: I0314 09:57:28.736844 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:57:28 crc kubenswrapper[4687]: E0314 09:57:28.737656 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:57:34 crc kubenswrapper[4687]: I0314 09:57:34.737927 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:57:35 crc kubenswrapper[4687]: I0314 09:57:35.257815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7"} Mar 14 09:57:39 crc kubenswrapper[4687]: I0314 09:57:39.737728 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:57:39 crc kubenswrapper[4687]: E0314 09:57:39.738352 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:57:42 crc kubenswrapper[4687]: I0314 09:57:42.220476 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:57:42 crc kubenswrapper[4687]: I0314 09:57:42.220853 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:57:43 crc kubenswrapper[4687]: I0314 09:57:43.335407 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" exitCode=1 Mar 14 09:57:43 crc kubenswrapper[4687]: I0314 09:57:43.335490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7"} Mar 14 09:57:43 crc kubenswrapper[4687]: I0314 09:57:43.335757 4687 scope.go:117] "RemoveContainer" containerID="1f760b36967a65868dd5301aafc892dd02d76763316fd2bc568a9b65aa5d6fb8" Mar 14 09:57:43 crc kubenswrapper[4687]: I0314 09:57:43.336786 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:57:43 crc kubenswrapper[4687]: E0314 09:57:43.336999 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:52 crc kubenswrapper[4687]: I0314 09:57:52.220521 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:57:52 crc kubenswrapper[4687]: I0314 09:57:52.221313 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 09:57:52 crc kubenswrapper[4687]: I0314 09:57:52.223090 4687 
scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:57:52 crc kubenswrapper[4687]: E0314 09:57:52.223385 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:52 crc kubenswrapper[4687]: I0314 09:57:52.487392 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:57:52 crc kubenswrapper[4687]: E0314 09:57:52.487625 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:57:53 crc kubenswrapper[4687]: I0314 09:57:53.737429 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:57:54 crc kubenswrapper[4687]: I0314 09:57:54.111159 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:57:54 crc kubenswrapper[4687]: I0314 09:57:54.111497 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 14 09:57:54 crc kubenswrapper[4687]: I0314 09:57:54.507439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb"} Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.145218 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558038-9h8jr"] Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146190 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146205 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146222 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ef0181-7084-48b9-8589-e19b2cd4498d" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146228 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ef0181-7084-48b9-8589-e19b2cd4498d" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146241 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146247 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146269 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146275 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146296 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146301 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146316 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146322 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4687]: E0314 09:58:00.146361 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146368 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146573 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ef0181-7084-48b9-8589-e19b2cd4498d" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0e54a9-4692-4ae6-9df5-b0c839361e54" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.146603 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a986ee1c-e47d-4524-9723-0009f51c1133" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.147320 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.149201 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.149449 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.149630 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.156473 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-9h8jr"] Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.175667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fmm\" (UniqueName: \"kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm\") pod \"auto-csr-approver-29558038-9h8jr\" (UID: \"2e324e3d-a37d-45c3-a733-609f5503aea9\") " pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.278521 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6fmm\" (UniqueName: \"kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm\") pod \"auto-csr-approver-29558038-9h8jr\" (UID: \"2e324e3d-a37d-45c3-a733-609f5503aea9\") " pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.296489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6fmm\" (UniqueName: \"kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm\") pod \"auto-csr-approver-29558038-9h8jr\" (UID: \"2e324e3d-a37d-45c3-a733-609f5503aea9\") " 
pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.468851 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.968730 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:58:00 crc kubenswrapper[4687]: I0314 09:58:00.976559 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-9h8jr"] Mar 14 09:58:01 crc kubenswrapper[4687]: I0314 09:58:01.569084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" event={"ID":"2e324e3d-a37d-45c3-a733-609f5503aea9","Type":"ContainerStarted","Data":"e27322d3d7fb4919c36e6634d5680ace114b209b3d1d28f3ec3bc7c70ac1f0c1"} Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.127881 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.128216 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.580216 4687 generic.go:334] "Generic (PLEG): container finished" podID="2e324e3d-a37d-45c3-a733-609f5503aea9" containerID="cccddcf1dc027c97429f1a9335c02824c57e0cc87e1047cef7d0ca0bfe5c51ca" exitCode=0 Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.580283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" event={"ID":"2e324e3d-a37d-45c3-a733-609f5503aea9","Type":"ContainerDied","Data":"cccddcf1dc027c97429f1a9335c02824c57e0cc87e1047cef7d0ca0bfe5c51ca"} Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.582371 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" exitCode=1 Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.582406 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb"} Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.582436 4687 scope.go:117] "RemoveContainer" containerID="af065ae0e952f80a3f725d4ddc95842a56c64d9dc40f64501fd54d1490aa8946" Mar 14 09:58:02 crc kubenswrapper[4687]: I0314 09:58:02.583138 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:58:02 crc kubenswrapper[4687]: E0314 09:58:02.583505 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:58:03 crc kubenswrapper[4687]: I0314 09:58:03.945973 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.072225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6fmm\" (UniqueName: \"kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm\") pod \"2e324e3d-a37d-45c3-a733-609f5503aea9\" (UID: \"2e324e3d-a37d-45c3-a733-609f5503aea9\") " Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.077595 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm" (OuterVolumeSpecName: "kube-api-access-g6fmm") pod "2e324e3d-a37d-45c3-a733-609f5503aea9" (UID: "2e324e3d-a37d-45c3-a733-609f5503aea9"). InnerVolumeSpecName "kube-api-access-g6fmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.175652 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6fmm\" (UniqueName: \"kubernetes.io/projected/2e324e3d-a37d-45c3-a733-609f5503aea9-kube-api-access-g6fmm\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.604828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" event={"ID":"2e324e3d-a37d-45c3-a733-609f5503aea9","Type":"ContainerDied","Data":"e27322d3d7fb4919c36e6634d5680ace114b209b3d1d28f3ec3bc7c70ac1f0c1"} Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.604878 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-9h8jr" Mar 14 09:58:04 crc kubenswrapper[4687]: I0314 09:58:04.604901 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27322d3d7fb4919c36e6634d5680ace114b209b3d1d28f3ec3bc7c70ac1f0c1" Mar 14 09:58:05 crc kubenswrapper[4687]: I0314 09:58:05.025764 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-7r79f"] Mar 14 09:58:05 crc kubenswrapper[4687]: I0314 09:58:05.036776 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-7r79f"] Mar 14 09:58:05 crc kubenswrapper[4687]: I0314 09:58:05.742937 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:58:05 crc kubenswrapper[4687]: E0314 09:58:05.743179 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:58:05 crc kubenswrapper[4687]: I0314 09:58:05.748971 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99535b5-bd4a-4f73-baf2-daf532890ea9" path="/var/lib/kubelet/pods/f99535b5-bd4a-4f73-baf2-daf532890ea9/volumes" Mar 14 09:58:12 crc kubenswrapper[4687]: I0314 09:58:12.128246 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:58:12 crc kubenswrapper[4687]: I0314 09:58:12.128689 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 09:58:12 crc kubenswrapper[4687]: I0314 09:58:12.129486 4687 scope.go:117] "RemoveContainer" 
containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:58:12 crc kubenswrapper[4687]: E0314 09:58:12.129751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:58:18 crc kubenswrapper[4687]: I0314 09:58:18.737968 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:58:18 crc kubenswrapper[4687]: E0314 09:58:18.738802 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:58:22 crc kubenswrapper[4687]: I0314 09:58:22.737179 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:58:22 crc kubenswrapper[4687]: E0314 09:58:22.737755 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.111378 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.111747 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.111800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.112799 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.112872 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" gracePeriod=600 Mar 14 09:58:24 crc kubenswrapper[4687]: E0314 09:58:24.254152 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 
09:58:24.802212 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" exitCode=0 Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.802256 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5"} Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.802292 4687 scope.go:117] "RemoveContainer" containerID="94bc3afe4e933b563a6b321cd8c2fa322550cd4dfc9e33a5c9d33facd98e5f8c" Mar 14 09:58:24 crc kubenswrapper[4687]: I0314 09:58:24.803143 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:58:24 crc kubenswrapper[4687]: E0314 09:58:24.803524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:58:29 crc kubenswrapper[4687]: I0314 09:58:29.737062 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:58:29 crc kubenswrapper[4687]: E0314 09:58:29.737789 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:58:30 
crc kubenswrapper[4687]: I0314 09:58:30.412728 4687 scope.go:117] "RemoveContainer" containerID="b76c45705d4e1a4d7130de457f1f5009d7e9c9e888550fcd7fc6441ff66ef383" Mar 14 09:58:34 crc kubenswrapper[4687]: I0314 09:58:34.736650 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:58:34 crc kubenswrapper[4687]: E0314 09:58:34.737459 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:58:38 crc kubenswrapper[4687]: I0314 09:58:38.737934 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:58:38 crc kubenswrapper[4687]: E0314 09:58:38.738677 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:58:43 crc kubenswrapper[4687]: I0314 09:58:43.737657 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:58:43 crc kubenswrapper[4687]: E0314 09:58:43.738923 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:58:48 crc kubenswrapper[4687]: I0314 09:58:48.737543 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:58:48 crc kubenswrapper[4687]: E0314 09:58:48.738064 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:58:53 crc kubenswrapper[4687]: I0314 09:58:53.737136 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:58:53 crc kubenswrapper[4687]: E0314 09:58:53.737928 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:58:54 crc kubenswrapper[4687]: I0314 09:58:54.737409 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:58:54 crc kubenswrapper[4687]: E0314 09:58:54.738095 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:59:02 crc kubenswrapper[4687]: I0314 09:59:02.739513 4687 scope.go:117] "RemoveContainer" 
containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:59:02 crc kubenswrapper[4687]: E0314 09:59:02.740578 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:59:06 crc kubenswrapper[4687]: I0314 09:59:06.737065 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:59:06 crc kubenswrapper[4687]: E0314 09:59:06.737853 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:59:09 crc kubenswrapper[4687]: I0314 09:59:09.737321 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:59:09 crc kubenswrapper[4687]: E0314 09:59:09.737793 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:59:15 crc kubenswrapper[4687]: I0314 09:59:15.747159 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:59:15 crc kubenswrapper[4687]: E0314 09:59:15.748122 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:59:21 crc kubenswrapper[4687]: I0314 09:59:21.736971 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:59:21 crc kubenswrapper[4687]: E0314 09:59:21.737946 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:59:24 crc kubenswrapper[4687]: I0314 09:59:24.736430 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:59:24 crc kubenswrapper[4687]: E0314 09:59:24.736939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:59:26 crc kubenswrapper[4687]: I0314 09:59:26.736590 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:59:26 crc kubenswrapper[4687]: E0314 09:59:26.737039 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:59:34 crc kubenswrapper[4687]: I0314 09:59:34.736771 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:59:34 crc kubenswrapper[4687]: E0314 09:59:34.737739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:59:36 crc kubenswrapper[4687]: I0314 09:59:36.737292 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:59:36 crc kubenswrapper[4687]: E0314 09:59:36.738190 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:59:37 crc kubenswrapper[4687]: I0314 09:59:37.736738 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:59:37 crc kubenswrapper[4687]: E0314 09:59:37.737014 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 09:59:49 crc kubenswrapper[4687]: I0314 09:59:49.737189 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 09:59:49 crc kubenswrapper[4687]: E0314 09:59:49.738055 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 09:59:50 crc kubenswrapper[4687]: I0314 09:59:50.737494 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 09:59:50 crc kubenswrapper[4687]: E0314 09:59:50.737840 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 09:59:52 crc kubenswrapper[4687]: I0314 09:59:52.737210 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 09:59:52 crc kubenswrapper[4687]: E0314 09:59:52.738213 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.155028 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6"] Mar 14 10:00:00 crc kubenswrapper[4687]: E0314 10:00:00.156215 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e324e3d-a37d-45c3-a733-609f5503aea9" containerName="oc" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.156233 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e324e3d-a37d-45c3-a733-609f5503aea9" containerName="oc" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.156587 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e324e3d-a37d-45c3-a733-609f5503aea9" containerName="oc" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.157460 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.160404 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.160415 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.167087 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558040-gr56b"] Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.169657 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.172604 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.173165 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.173570 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.176620 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-gr56b"] Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.185799 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6"] Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.260712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8sp\" (UniqueName: \"kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp\") pod \"auto-csr-approver-29558040-gr56b\" (UID: \"fb524d62-fef9-4b18-a14e-e8558e70a40c\") " pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.260998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2kf\" (UniqueName: \"kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.261078 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.261110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.363185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.363250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.363421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8sp\" (UniqueName: \"kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp\") pod \"auto-csr-approver-29558040-gr56b\" (UID: \"fb524d62-fef9-4b18-a14e-e8558e70a40c\") " 
pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.363451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2kf\" (UniqueName: \"kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.364302 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.371687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.379531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2kf\" (UniqueName: \"kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf\") pod \"collect-profiles-29558040-llcb6\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.384783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8sp\" (UniqueName: 
\"kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp\") pod \"auto-csr-approver-29558040-gr56b\" (UID: \"fb524d62-fef9-4b18-a14e-e8558e70a40c\") " pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.486434 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.508321 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.918725 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-gr56b"] Mar 14 10:00:00 crc kubenswrapper[4687]: W0314 10:00:00.926180 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb524d62_fef9_4b18_a14e_e8558e70a40c.slice/crio-6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb WatchSource:0}: Error finding container 6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb: Status 404 returned error can't find the container with id 6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb Mar 14 10:00:00 crc kubenswrapper[4687]: I0314 10:00:00.951500 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6"] Mar 14 10:00:00 crc kubenswrapper[4687]: W0314 10:00:00.958622 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb53136b2_a076_4318_8544_2bbc07036490.slice/crio-6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d WatchSource:0}: Error finding container 6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d: Status 404 returned error can't 
find the container with id 6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d Mar 14 10:00:01 crc kubenswrapper[4687]: I0314 10:00:01.736794 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:00:01 crc kubenswrapper[4687]: E0314 10:00:01.737603 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:00:01 crc kubenswrapper[4687]: I0314 10:00:01.767298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-gr56b" event={"ID":"fb524d62-fef9-4b18-a14e-e8558e70a40c","Type":"ContainerStarted","Data":"6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb"} Mar 14 10:00:01 crc kubenswrapper[4687]: I0314 10:00:01.769223 4687 generic.go:334] "Generic (PLEG): container finished" podID="b53136b2-a076-4318-8544-2bbc07036490" containerID="1c1a3ee535598ab15faf6e89069ffb1212c739deac9de557392e00828a11e392" exitCode=0 Mar 14 10:00:01 crc kubenswrapper[4687]: I0314 10:00:01.769269 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" event={"ID":"b53136b2-a076-4318-8544-2bbc07036490","Type":"ContainerDied","Data":"1c1a3ee535598ab15faf6e89069ffb1212c739deac9de557392e00828a11e392"} Mar 14 10:00:01 crc kubenswrapper[4687]: I0314 10:00:01.769298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" event={"ID":"b53136b2-a076-4318-8544-2bbc07036490","Type":"ContainerStarted","Data":"6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d"} Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 
10:00:03.106432 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.219200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume\") pod \"b53136b2-a076-4318-8544-2bbc07036490\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.219378 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m2kf\" (UniqueName: \"kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf\") pod \"b53136b2-a076-4318-8544-2bbc07036490\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.219554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume\") pod \"b53136b2-a076-4318-8544-2bbc07036490\" (UID: \"b53136b2-a076-4318-8544-2bbc07036490\") " Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.220061 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume" (OuterVolumeSpecName: "config-volume") pod "b53136b2-a076-4318-8544-2bbc07036490" (UID: "b53136b2-a076-4318-8544-2bbc07036490"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.220533 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53136b2-a076-4318-8544-2bbc07036490-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.225492 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf" (OuterVolumeSpecName: "kube-api-access-4m2kf") pod "b53136b2-a076-4318-8544-2bbc07036490" (UID: "b53136b2-a076-4318-8544-2bbc07036490"). InnerVolumeSpecName "kube-api-access-4m2kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.227491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b53136b2-a076-4318-8544-2bbc07036490" (UID: "b53136b2-a076-4318-8544-2bbc07036490"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.324739 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m2kf\" (UniqueName: \"kubernetes.io/projected/b53136b2-a076-4318-8544-2bbc07036490-kube-api-access-4m2kf\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.324772 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b53136b2-a076-4318-8544-2bbc07036490-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.787537 4687 generic.go:334] "Generic (PLEG): container finished" podID="fb524d62-fef9-4b18-a14e-e8558e70a40c" containerID="8121fe1a889a33bc54739366bacfe185e00b9b480097170686d06f9512bee6fa" exitCode=0 Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.787604 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-gr56b" event={"ID":"fb524d62-fef9-4b18-a14e-e8558e70a40c","Type":"ContainerDied","Data":"8121fe1a889a33bc54739366bacfe185e00b9b480097170686d06f9512bee6fa"} Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.791117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" event={"ID":"b53136b2-a076-4318-8544-2bbc07036490","Type":"ContainerDied","Data":"6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d"} Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.791157 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c168622baab4b5963bdfde893cc58170622be8248fc56122082bef89eb27a4d" Mar 14 10:00:03 crc kubenswrapper[4687]: I0314 10:00:03.791174 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-llcb6" Mar 14 10:00:04 crc kubenswrapper[4687]: I0314 10:00:04.183751 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2"] Mar 14 10:00:04 crc kubenswrapper[4687]: I0314 10:00:04.191347 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-zlrp2"] Mar 14 10:00:04 crc kubenswrapper[4687]: I0314 10:00:04.737484 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:00:04 crc kubenswrapper[4687]: E0314 10:00:04.737829 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.187064 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.266933 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr8sp\" (UniqueName: \"kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp\") pod \"fb524d62-fef9-4b18-a14e-e8558e70a40c\" (UID: \"fb524d62-fef9-4b18-a14e-e8558e70a40c\") " Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.274534 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp" (OuterVolumeSpecName: "kube-api-access-jr8sp") pod "fb524d62-fef9-4b18-a14e-e8558e70a40c" (UID: "fb524d62-fef9-4b18-a14e-e8558e70a40c"). InnerVolumeSpecName "kube-api-access-jr8sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.369260 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr8sp\" (UniqueName: \"kubernetes.io/projected/fb524d62-fef9-4b18-a14e-e8558e70a40c-kube-api-access-jr8sp\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.755098 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a" path="/var/lib/kubelet/pods/4f3b3f76-8697-4ee3-8a22-0ed7c8dd234a/volumes" Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.814983 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-gr56b" event={"ID":"fb524d62-fef9-4b18-a14e-e8558e70a40c","Type":"ContainerDied","Data":"6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb"} Mar 14 10:00:05 crc kubenswrapper[4687]: I0314 10:00:05.815053 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed34bea9acbcf2db4de90c4d674e68d29b627a52f95d1635cb735f84ee900fb" Mar 14 10:00:05 
crc kubenswrapper[4687]: I0314 10:00:05.815104 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-gr56b" Mar 14 10:00:06 crc kubenswrapper[4687]: I0314 10:00:06.257459 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-mfqj5"] Mar 14 10:00:06 crc kubenswrapper[4687]: I0314 10:00:06.265223 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-mfqj5"] Mar 14 10:00:06 crc kubenswrapper[4687]: I0314 10:00:06.737324 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:00:06 crc kubenswrapper[4687]: E0314 10:00:06.737637 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:07 crc kubenswrapper[4687]: I0314 10:00:07.753795 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5263ffa9-c4aa-46f1-a90e-95d79cd35119" path="/var/lib/kubelet/pods/5263ffa9-c4aa-46f1-a90e-95d79cd35119/volumes" Mar 14 10:00:15 crc kubenswrapper[4687]: I0314 10:00:15.747812 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:00:15 crc kubenswrapper[4687]: E0314 10:00:15.748839 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:00:16 crc kubenswrapper[4687]: I0314 10:00:16.737146 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:00:16 crc kubenswrapper[4687]: E0314 10:00:16.737458 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:00:17 crc kubenswrapper[4687]: I0314 10:00:17.736906 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:00:17 crc kubenswrapper[4687]: E0314 10:00:17.737303 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:28 crc kubenswrapper[4687]: I0314 10:00:28.758659 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:00:28 crc kubenswrapper[4687]: E0314 10:00:28.759523 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.496415 4687 scope.go:117] "RemoveContainer" containerID="8d1c9066d7313299219daec14821815b12352301e578bb7d56c12b70a2be0015" Mar 14 
10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.549623 4687 scope.go:117] "RemoveContainer" containerID="ffc606fabedbc5c4e468966c010b5c9b7c2a7b3db367742eb925fdf0ba1188c3" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.569106 4687 scope.go:117] "RemoveContainer" containerID="5ff888c6858ec98e845cd3bcdd76428e1e25f452e2485581f86c919dd1fdb459" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.587122 4687 scope.go:117] "RemoveContainer" containerID="6cce6377159fb7da4fcef3947c369654ec5ebfecbb2e9a3333c0d502006250ce" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.642896 4687 scope.go:117] "RemoveContainer" containerID="0119f18a331e35bbc996e6ea7538c456f99ed36a37613ee3d8e2dccbbeba6946" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.737510 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:00:30 crc kubenswrapper[4687]: E0314 10:00:30.737884 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:00:30 crc kubenswrapper[4687]: I0314 10:00:30.738559 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:00:30 crc kubenswrapper[4687]: E0314 10:00:30.738801 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:00:42 crc 
kubenswrapper[4687]: I0314 10:00:42.737018 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:00:42 crc kubenswrapper[4687]: E0314 10:00:42.737734 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:42 crc kubenswrapper[4687]: I0314 10:00:42.739286 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:00:42 crc kubenswrapper[4687]: E0314 10:00:42.739778 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:00:43 crc kubenswrapper[4687]: I0314 10:00:43.736893 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:00:43 crc kubenswrapper[4687]: E0314 10:00:43.737209 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:00:53 crc kubenswrapper[4687]: I0314 10:00:53.737369 4687 scope.go:117] "RemoveContainer" 
containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:00:53 crc kubenswrapper[4687]: E0314 10:00:53.738544 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:00:57 crc kubenswrapper[4687]: I0314 10:00:57.737006 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:00:57 crc kubenswrapper[4687]: I0314 10:00:57.737682 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:00:57 crc kubenswrapper[4687]: E0314 10:00:57.737814 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:00:57 crc kubenswrapper[4687]: E0314 10:00:57.737929 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.144149 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29558041-tgrn9"] Mar 14 10:01:00 crc kubenswrapper[4687]: E0314 10:01:00.145003 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b53136b2-a076-4318-8544-2bbc07036490" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.145016 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53136b2-a076-4318-8544-2bbc07036490" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4687]: E0314 10:01:00.145035 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb524d62-fef9-4b18-a14e-e8558e70a40c" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.145041 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb524d62-fef9-4b18-a14e-e8558e70a40c" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.145477 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53136b2-a076-4318-8544-2bbc07036490" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.145498 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb524d62-fef9-4b18-a14e-e8558e70a40c" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.146095 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.172424 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29558041-tgrn9"] Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.286019 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.286171 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.286201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.286229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spn96\" (UniqueName: \"kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.388426 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.388477 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.388511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spn96\" (UniqueName: \"kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.388581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.394990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.395612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.399366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.404716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spn96\" (UniqueName: \"kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96\") pod \"keystone-cron-29558041-tgrn9\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:00 crc kubenswrapper[4687]: I0314 10:01:00.470730 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:01 crc kubenswrapper[4687]: I0314 10:01:01.059907 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29558041-tgrn9"] Mar 14 10:01:01 crc kubenswrapper[4687]: I0314 10:01:01.397314 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-tgrn9" event={"ID":"94953a0f-cefa-459d-9beb-b73414626765","Type":"ContainerStarted","Data":"d3b5bafafcad85e924e20735153b9b30bd7c85259606cf599ab9701313169b9e"} Mar 14 10:01:01 crc kubenswrapper[4687]: I0314 10:01:01.397850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-tgrn9" event={"ID":"94953a0f-cefa-459d-9beb-b73414626765","Type":"ContainerStarted","Data":"c6727fb135a513b5d87e6541061866eaf9769982775c4905ba8b0ca9643b4781"} Mar 14 10:01:01 crc kubenswrapper[4687]: I0314 10:01:01.415083 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29558041-tgrn9" podStartSLOduration=1.415060715 podStartE2EDuration="1.415060715s" podCreationTimestamp="2026-03-14 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 10:01:01.411355014 +0000 UTC m=+3846.399595389" watchObservedRunningTime="2026-03-14 10:01:01.415060715 +0000 UTC m=+3846.403301090" Mar 14 10:01:04 crc kubenswrapper[4687]: I0314 10:01:04.424519 4687 generic.go:334] "Generic (PLEG): container finished" podID="94953a0f-cefa-459d-9beb-b73414626765" containerID="d3b5bafafcad85e924e20735153b9b30bd7c85259606cf599ab9701313169b9e" exitCode=0 Mar 14 10:01:04 crc kubenswrapper[4687]: I0314 10:01:04.424586 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-tgrn9" 
event={"ID":"94953a0f-cefa-459d-9beb-b73414626765","Type":"ContainerDied","Data":"d3b5bafafcad85e924e20735153b9b30bd7c85259606cf599ab9701313169b9e"} Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.746254 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:01:05 crc kubenswrapper[4687]: E0314 10:01:05.746926 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.797307 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.885657 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data\") pod \"94953a0f-cefa-459d-9beb-b73414626765\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.885712 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spn96\" (UniqueName: \"kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96\") pod \"94953a0f-cefa-459d-9beb-b73414626765\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.885918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle\") pod \"94953a0f-cefa-459d-9beb-b73414626765\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " 
Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.885971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys\") pod \"94953a0f-cefa-459d-9beb-b73414626765\" (UID: \"94953a0f-cefa-459d-9beb-b73414626765\") " Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.890750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "94953a0f-cefa-459d-9beb-b73414626765" (UID: "94953a0f-cefa-459d-9beb-b73414626765"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.891392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96" (OuterVolumeSpecName: "kube-api-access-spn96") pod "94953a0f-cefa-459d-9beb-b73414626765" (UID: "94953a0f-cefa-459d-9beb-b73414626765"). InnerVolumeSpecName "kube-api-access-spn96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.921021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94953a0f-cefa-459d-9beb-b73414626765" (UID: "94953a0f-cefa-459d-9beb-b73414626765"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.946301 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data" (OuterVolumeSpecName: "config-data") pod "94953a0f-cefa-459d-9beb-b73414626765" (UID: "94953a0f-cefa-459d-9beb-b73414626765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.988244 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.988278 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.988286 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94953a0f-cefa-459d-9beb-b73414626765-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4687]: I0314 10:01:05.988295 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spn96\" (UniqueName: \"kubernetes.io/projected/94953a0f-cefa-459d-9beb-b73414626765-kube-api-access-spn96\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:06 crc kubenswrapper[4687]: I0314 10:01:06.442264 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-tgrn9" event={"ID":"94953a0f-cefa-459d-9beb-b73414626765","Type":"ContainerDied","Data":"c6727fb135a513b5d87e6541061866eaf9769982775c4905ba8b0ca9643b4781"} Mar 14 10:01:06 crc kubenswrapper[4687]: I0314 10:01:06.442304 4687 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="c6727fb135a513b5d87e6541061866eaf9769982775c4905ba8b0ca9643b4781" Mar 14 10:01:06 crc kubenswrapper[4687]: I0314 10:01:06.442358 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29558041-tgrn9" Mar 14 10:01:11 crc kubenswrapper[4687]: I0314 10:01:11.736923 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:01:11 crc kubenswrapper[4687]: E0314 10:01:11.737737 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:01:12 crc kubenswrapper[4687]: I0314 10:01:12.737550 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:01:12 crc kubenswrapper[4687]: E0314 10:01:12.738055 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:01:19 crc kubenswrapper[4687]: I0314 10:01:19.737258 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:01:19 crc kubenswrapper[4687]: E0314 10:01:19.737990 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:01:23 crc kubenswrapper[4687]: I0314 10:01:23.737445 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:01:23 crc kubenswrapper[4687]: E0314 10:01:23.738214 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:01:25 crc kubenswrapper[4687]: I0314 10:01:25.745020 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:01:25 crc kubenswrapper[4687]: E0314 10:01:25.745726 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:01:31 crc kubenswrapper[4687]: I0314 10:01:31.737111 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:01:31 crc kubenswrapper[4687]: E0314 10:01:31.737825 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:01:36 crc kubenswrapper[4687]: I0314 10:01:36.737696 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:01:36 crc kubenswrapper[4687]: E0314 10:01:36.738571 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:01:37 crc kubenswrapper[4687]: I0314 10:01:37.737540 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:01:37 crc kubenswrapper[4687]: E0314 10:01:37.738116 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:01:45 crc kubenswrapper[4687]: I0314 10:01:45.744796 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:01:45 crc kubenswrapper[4687]: E0314 10:01:45.745652 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:01:50 crc kubenswrapper[4687]: I0314 10:01:50.737374 4687 scope.go:117] "RemoveContainer" 
containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:01:50 crc kubenswrapper[4687]: E0314 10:01:50.738064 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:01:51 crc kubenswrapper[4687]: I0314 10:01:51.737536 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:01:51 crc kubenswrapper[4687]: E0314 10:01:51.738152 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:01:57 crc kubenswrapper[4687]: I0314 10:01:57.802665 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:01:57 crc kubenswrapper[4687]: E0314 10:01:57.803451 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.149418 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558042-gbvx5"] Mar 14 10:02:00 crc kubenswrapper[4687]: E0314 10:02:00.150548 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94953a0f-cefa-459d-9beb-b73414626765" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.150576 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="94953a0f-cefa-459d-9beb-b73414626765" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.150809 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="94953a0f-cefa-459d-9beb-b73414626765" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.151605 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.154266 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.154564 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.154736 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.158649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-gbvx5"] Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.351863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj\") pod \"auto-csr-approver-29558042-gbvx5\" (UID: \"91af21a4-1ca4-4412-b724-e3d71d50d1f1\") " pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.454452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj\") pod \"auto-csr-approver-29558042-gbvx5\" (UID: \"91af21a4-1ca4-4412-b724-e3d71d50d1f1\") " pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.484959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj\") pod \"auto-csr-approver-29558042-gbvx5\" (UID: \"91af21a4-1ca4-4412-b724-e3d71d50d1f1\") " pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:00 crc kubenswrapper[4687]: I0314 10:02:00.782762 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:01 crc kubenswrapper[4687]: I0314 10:02:01.335013 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-gbvx5"] Mar 14 10:02:01 crc kubenswrapper[4687]: I0314 10:02:01.932018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" event={"ID":"91af21a4-1ca4-4412-b724-e3d71d50d1f1","Type":"ContainerStarted","Data":"62c44a44ba234734795b64c9cba5e160af020c719aa44b194a003a6f3189f9d7"} Mar 14 10:02:02 crc kubenswrapper[4687]: I0314 10:02:02.737909 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:02:02 crc kubenswrapper[4687]: E0314 10:02:02.738870 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:02:02 crc kubenswrapper[4687]: I0314 10:02:02.943986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" event={"ID":"91af21a4-1ca4-4412-b724-e3d71d50d1f1","Type":"ContainerStarted","Data":"49e592fc37fd4956c0cc70ce15aad2969be6dfeff2dcee2b8932255dd81e0314"} Mar 14 10:02:03 crc kubenswrapper[4687]: I0314 10:02:03.953447 4687 generic.go:334] "Generic (PLEG): container finished" podID="91af21a4-1ca4-4412-b724-e3d71d50d1f1" containerID="49e592fc37fd4956c0cc70ce15aad2969be6dfeff2dcee2b8932255dd81e0314" exitCode=0 Mar 14 10:02:03 crc kubenswrapper[4687]: I0314 10:02:03.953558 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" event={"ID":"91af21a4-1ca4-4412-b724-e3d71d50d1f1","Type":"ContainerDied","Data":"49e592fc37fd4956c0cc70ce15aad2969be6dfeff2dcee2b8932255dd81e0314"} Mar 14 10:02:04 crc kubenswrapper[4687]: I0314 10:02:04.736925 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:02:04 crc kubenswrapper[4687]: E0314 10:02:04.737266 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.332152 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.456373 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj\") pod \"91af21a4-1ca4-4412-b724-e3d71d50d1f1\" (UID: \"91af21a4-1ca4-4412-b724-e3d71d50d1f1\") " Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.463685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj" (OuterVolumeSpecName: "kube-api-access-vm5bj") pod "91af21a4-1ca4-4412-b724-e3d71d50d1f1" (UID: "91af21a4-1ca4-4412-b724-e3d71d50d1f1"). InnerVolumeSpecName "kube-api-access-vm5bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.560070 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/91af21a4-1ca4-4412-b724-e3d71d50d1f1-kube-api-access-vm5bj\") on node \"crc\" DevicePath \"\"" Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.971704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" event={"ID":"91af21a4-1ca4-4412-b724-e3d71d50d1f1","Type":"ContainerDied","Data":"62c44a44ba234734795b64c9cba5e160af020c719aa44b194a003a6f3189f9d7"} Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.971751 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c44a44ba234734795b64c9cba5e160af020c719aa44b194a003a6f3189f9d7" Mar 14 10:02:05 crc kubenswrapper[4687]: I0314 10:02:05.971813 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-gbvx5" Mar 14 10:02:06 crc kubenswrapper[4687]: I0314 10:02:06.022438 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-7fp4d"] Mar 14 10:02:06 crc kubenswrapper[4687]: I0314 10:02:06.029512 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-7fp4d"] Mar 14 10:02:07 crc kubenswrapper[4687]: I0314 10:02:07.747231 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ef0181-7084-48b9-8589-e19b2cd4498d" path="/var/lib/kubelet/pods/62ef0181-7084-48b9-8589-e19b2cd4498d/volumes" Mar 14 10:02:12 crc kubenswrapper[4687]: I0314 10:02:12.737923 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:02:12 crc kubenswrapper[4687]: E0314 10:02:12.738902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:02:14 crc kubenswrapper[4687]: I0314 10:02:14.736568 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:02:14 crc kubenswrapper[4687]: E0314 10:02:14.737126 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:02:19 crc kubenswrapper[4687]: I0314 
10:02:19.736743 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:02:19 crc kubenswrapper[4687]: E0314 10:02:19.737588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:02:25 crc kubenswrapper[4687]: I0314 10:02:25.743992 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:02:25 crc kubenswrapper[4687]: E0314 10:02:25.745756 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:02:26 crc kubenswrapper[4687]: I0314 10:02:26.737045 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:02:26 crc kubenswrapper[4687]: E0314 10:02:26.737715 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:02:30 crc kubenswrapper[4687]: I0314 10:02:30.764001 4687 scope.go:117] "RemoveContainer" containerID="15ba539d79e7e4513e675ab8a8fdf1fb75e7a408abfd0d342dbba978b6cd7239" Mar 14 10:02:34 crc 
kubenswrapper[4687]: I0314 10:02:34.738028 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:02:34 crc kubenswrapper[4687]: E0314 10:02:34.740277 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:02:39 crc kubenswrapper[4687]: I0314 10:02:39.736735 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:02:39 crc kubenswrapper[4687]: E0314 10:02:39.737456 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:02:40 crc kubenswrapper[4687]: I0314 10:02:40.737646 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:02:40 crc kubenswrapper[4687]: E0314 10:02:40.737981 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:02:45 crc kubenswrapper[4687]: I0314 10:02:45.750062 4687 scope.go:117] "RemoveContainer" 
containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:02:46 crc kubenswrapper[4687]: I0314 10:02:46.370371 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c"} Mar 14 10:02:51 crc kubenswrapper[4687]: I0314 10:02:51.737268 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:02:51 crc kubenswrapper[4687]: E0314 10:02:51.737951 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:02:52 crc kubenswrapper[4687]: I0314 10:02:52.220665 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:02:52 crc kubenswrapper[4687]: I0314 10:02:52.221012 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:02:54 crc kubenswrapper[4687]: I0314 10:02:54.442974 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" exitCode=1 Mar 14 10:02:54 crc kubenswrapper[4687]: I0314 10:02:54.443543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c"} Mar 14 10:02:54 crc 
kubenswrapper[4687]: I0314 10:02:54.443595 4687 scope.go:117] "RemoveContainer" containerID="e0fc659453c5cf7695838b6ee85cd5709f11eaab38ba10b755b4ebb9a10393a7" Mar 14 10:02:54 crc kubenswrapper[4687]: I0314 10:02:54.444410 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:02:54 crc kubenswrapper[4687]: E0314 10:02:54.444676 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:02:54 crc kubenswrapper[4687]: I0314 10:02:54.737371 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:02:54 crc kubenswrapper[4687]: E0314 10:02:54.737831 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:03:02 crc kubenswrapper[4687]: I0314 10:03:02.219955 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:03:02 crc kubenswrapper[4687]: I0314 10:03:02.220627 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:03:02 crc kubenswrapper[4687]: I0314 10:03:02.221659 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:03:02 crc kubenswrapper[4687]: E0314 10:03:02.222016 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:03:03 crc kubenswrapper[4687]: I0314 10:03:03.737102 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:03:03 crc kubenswrapper[4687]: E0314 10:03:03.738273 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:03:05 crc kubenswrapper[4687]: I0314 10:03:05.743288 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:03:06 crc kubenswrapper[4687]: I0314 10:03:06.567865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7"} Mar 14 10:03:12 crc kubenswrapper[4687]: I0314 10:03:12.128539 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:03:12 crc kubenswrapper[4687]: I0314 10:03:12.129047 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:03:14 crc kubenswrapper[4687]: I0314 10:03:14.671819 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" 
containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" exitCode=1 Mar 14 10:03:14 crc kubenswrapper[4687]: I0314 10:03:14.671954 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7"} Mar 14 10:03:14 crc kubenswrapper[4687]: I0314 10:03:14.673021 4687 scope.go:117] "RemoveContainer" containerID="9dc09c8d815b6f4aabf2d58e3daec9fa6fc2554da2a1e552387c7250fca0dedb" Mar 14 10:03:14 crc kubenswrapper[4687]: I0314 10:03:14.674104 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:03:14 crc kubenswrapper[4687]: E0314 10:03:14.674718 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:03:15 crc kubenswrapper[4687]: I0314 10:03:15.743369 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:03:15 crc kubenswrapper[4687]: E0314 10:03:15.743660 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:03:16 crc kubenswrapper[4687]: I0314 10:03:16.737199 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:03:16 crc kubenswrapper[4687]: E0314 10:03:16.737940 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:03:22 crc kubenswrapper[4687]: I0314 10:03:22.128572 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:03:22 crc kubenswrapper[4687]: I0314 10:03:22.129059 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:03:22 crc kubenswrapper[4687]: I0314 10:03:22.129774 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:03:22 crc kubenswrapper[4687]: E0314 10:03:22.129984 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:03:28 crc kubenswrapper[4687]: I0314 10:03:28.737037 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:03:29 crc kubenswrapper[4687]: I0314 10:03:29.737196 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:03:29 crc kubenswrapper[4687]: E0314 10:03:29.739060 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:03:29 crc kubenswrapper[4687]: I0314 10:03:29.813686 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9"} Mar 14 10:03:32 crc kubenswrapper[4687]: I0314 10:03:32.737587 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:03:32 crc kubenswrapper[4687]: E0314 10:03:32.738103 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:03:42 crc kubenswrapper[4687]: I0314 10:03:42.737681 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:03:42 crc kubenswrapper[4687]: E0314 10:03:42.738620 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:03:46 crc kubenswrapper[4687]: I0314 10:03:46.738146 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:03:46 crc kubenswrapper[4687]: E0314 10:03:46.739470 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:03:54 crc kubenswrapper[4687]: I0314 10:03:54.736966 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:03:54 crc kubenswrapper[4687]: E0314 10:03:54.737641 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:03:58 crc kubenswrapper[4687]: I0314 10:03:58.736873 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:03:58 crc kubenswrapper[4687]: E0314 10:03:58.737488 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.153317 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558044-7zzvs"] Mar 14 10:04:00 crc kubenswrapper[4687]: E0314 10:04:00.154118 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91af21a4-1ca4-4412-b724-e3d71d50d1f1" containerName="oc" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.154132 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="91af21a4-1ca4-4412-b724-e3d71d50d1f1" containerName="oc" Mar 14 
10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.154324 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="91af21a4-1ca4-4412-b724-e3d71d50d1f1" containerName="oc" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.155090 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.158147 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.158191 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.158251 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.164611 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-7zzvs"] Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.205660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kg2p\" (UniqueName: \"kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p\") pod \"auto-csr-approver-29558044-7zzvs\" (UID: \"622c66fc-face-46ca-bcfc-12f45e038261\") " pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.307585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kg2p\" (UniqueName: \"kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p\") pod \"auto-csr-approver-29558044-7zzvs\" (UID: \"622c66fc-face-46ca-bcfc-12f45e038261\") " pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.326557 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kg2p\" (UniqueName: \"kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p\") pod \"auto-csr-approver-29558044-7zzvs\" (UID: \"622c66fc-face-46ca-bcfc-12f45e038261\") " pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.476179 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.923315 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-7zzvs"] Mar 14 10:04:00 crc kubenswrapper[4687]: I0314 10:04:00.932798 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:04:01 crc kubenswrapper[4687]: I0314 10:04:01.099658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" event={"ID":"622c66fc-face-46ca-bcfc-12f45e038261","Type":"ContainerStarted","Data":"5d64d06307b36791d98a7e5c2cecce340c2b018b6edc66fc68ed86e947e08685"} Mar 14 10:04:03 crc kubenswrapper[4687]: I0314 10:04:03.120055 4687 generic.go:334] "Generic (PLEG): container finished" podID="622c66fc-face-46ca-bcfc-12f45e038261" containerID="7ca9d91e1e6e9463a4178d9ffa1094e3c1d16152a5cb14b8773c68f6a3ed6552" exitCode=0 Mar 14 10:04:03 crc kubenswrapper[4687]: I0314 10:04:03.120109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" event={"ID":"622c66fc-face-46ca-bcfc-12f45e038261","Type":"ContainerDied","Data":"7ca9d91e1e6e9463a4178d9ffa1094e3c1d16152a5cb14b8773c68f6a3ed6552"} Mar 14 10:04:04 crc kubenswrapper[4687]: I0314 10:04:04.528064 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:04 crc kubenswrapper[4687]: I0314 10:04:04.601430 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kg2p\" (UniqueName: \"kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p\") pod \"622c66fc-face-46ca-bcfc-12f45e038261\" (UID: \"622c66fc-face-46ca-bcfc-12f45e038261\") " Mar 14 10:04:04 crc kubenswrapper[4687]: I0314 10:04:04.608225 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p" (OuterVolumeSpecName: "kube-api-access-2kg2p") pod "622c66fc-face-46ca-bcfc-12f45e038261" (UID: "622c66fc-face-46ca-bcfc-12f45e038261"). InnerVolumeSpecName "kube-api-access-2kg2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:04:04 crc kubenswrapper[4687]: I0314 10:04:04.704409 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kg2p\" (UniqueName: \"kubernetes.io/projected/622c66fc-face-46ca-bcfc-12f45e038261-kube-api-access-2kg2p\") on node \"crc\" DevicePath \"\"" Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.149289 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" event={"ID":"622c66fc-face-46ca-bcfc-12f45e038261","Type":"ContainerDied","Data":"5d64d06307b36791d98a7e5c2cecce340c2b018b6edc66fc68ed86e947e08685"} Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.149619 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d64d06307b36791d98a7e5c2cecce340c2b018b6edc66fc68ed86e947e08685" Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.149675 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-7zzvs" Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.614692 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-9h8jr"] Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.624491 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-9h8jr"] Mar 14 10:04:05 crc kubenswrapper[4687]: I0314 10:04:05.753121 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e324e3d-a37d-45c3-a733-609f5503aea9" path="/var/lib/kubelet/pods/2e324e3d-a37d-45c3-a733-609f5503aea9/volumes" Mar 14 10:04:08 crc kubenswrapper[4687]: I0314 10:04:08.738035 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:04:08 crc kubenswrapper[4687]: E0314 10:04:08.738828 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:04:12 crc kubenswrapper[4687]: I0314 10:04:12.737294 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:04:12 crc kubenswrapper[4687]: E0314 10:04:12.738061 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:04:19 crc kubenswrapper[4687]: I0314 10:04:19.736682 4687 scope.go:117] "RemoveContainer" 
containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:04:19 crc kubenswrapper[4687]: E0314 10:04:19.737483 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:04:25 crc kubenswrapper[4687]: I0314 10:04:25.742816 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:04:25 crc kubenswrapper[4687]: E0314 10:04:25.743572 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:04:30 crc kubenswrapper[4687]: I0314 10:04:30.850127 4687 scope.go:117] "RemoveContainer" containerID="cccddcf1dc027c97429f1a9335c02824c57e0cc87e1047cef7d0ca0bfe5c51ca" Mar 14 10:04:31 crc kubenswrapper[4687]: I0314 10:04:31.737850 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:04:31 crc kubenswrapper[4687]: E0314 10:04:31.738555 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:04:36 crc kubenswrapper[4687]: I0314 10:04:36.737437 4687 scope.go:117] "RemoveContainer" 
containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:04:36 crc kubenswrapper[4687]: E0314 10:04:36.738167 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.038145 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:37 crc kubenswrapper[4687]: E0314 10:04:37.038668 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622c66fc-face-46ca-bcfc-12f45e038261" containerName="oc" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.038691 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="622c66fc-face-46ca-bcfc-12f45e038261" containerName="oc" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.038891 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="622c66fc-face-46ca-bcfc-12f45e038261" containerName="oc" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.040399 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.050924 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.092310 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.092401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.092518 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzjj\" (UniqueName: \"kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.194107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzjj\" (UniqueName: \"kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.194225 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.194292 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.194769 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.194840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.218093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzjj\" (UniqueName: \"kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj\") pod \"certified-operators-9fqbj\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.368000 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:37 crc kubenswrapper[4687]: W0314 10:04:37.926832 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8898fefa_9832_43f9_b581_3e3f2b3fefcd.slice/crio-cd28fc51d0862f36f8b57b2436379cc0d2cf67647b17091210289b13d2ae8a2d WatchSource:0}: Error finding container cd28fc51d0862f36f8b57b2436379cc0d2cf67647b17091210289b13d2ae8a2d: Status 404 returned error can't find the container with id cd28fc51d0862f36f8b57b2436379cc0d2cf67647b17091210289b13d2ae8a2d Mar 14 10:04:37 crc kubenswrapper[4687]: I0314 10:04:37.932014 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:38 crc kubenswrapper[4687]: I0314 10:04:38.463786 4687 generic.go:334] "Generic (PLEG): container finished" podID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerID="d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971" exitCode=0 Mar 14 10:04:38 crc kubenswrapper[4687]: I0314 10:04:38.463864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerDied","Data":"d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971"} Mar 14 10:04:38 crc kubenswrapper[4687]: I0314 10:04:38.464094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerStarted","Data":"cd28fc51d0862f36f8b57b2436379cc0d2cf67647b17091210289b13d2ae8a2d"} Mar 14 10:04:39 crc kubenswrapper[4687]: I0314 10:04:39.476551 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" 
event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerStarted","Data":"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8"} Mar 14 10:04:41 crc kubenswrapper[4687]: I0314 10:04:41.494966 4687 generic.go:334] "Generic (PLEG): container finished" podID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerID="40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8" exitCode=0 Mar 14 10:04:41 crc kubenswrapper[4687]: I0314 10:04:41.495049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerDied","Data":"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8"} Mar 14 10:04:42 crc kubenswrapper[4687]: I0314 10:04:42.514929 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerStarted","Data":"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317"} Mar 14 10:04:42 crc kubenswrapper[4687]: I0314 10:04:42.540864 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9fqbj" podStartSLOduration=2.075932806 podStartE2EDuration="5.540845762s" podCreationTimestamp="2026-03-14 10:04:37 +0000 UTC" firstStartedPulling="2026-03-14 10:04:38.465293875 +0000 UTC m=+4063.453534250" lastFinishedPulling="2026-03-14 10:04:41.930206791 +0000 UTC m=+4066.918447206" observedRunningTime="2026-03-14 10:04:42.531441521 +0000 UTC m=+4067.519681896" watchObservedRunningTime="2026-03-14 10:04:42.540845762 +0000 UTC m=+4067.529086147" Mar 14 10:04:45 crc kubenswrapper[4687]: I0314 10:04:45.749276 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:04:45 crc kubenswrapper[4687]: E0314 10:04:45.749838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:04:47 crc kubenswrapper[4687]: I0314 10:04:47.368904 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:47 crc kubenswrapper[4687]: I0314 10:04:47.369156 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:47 crc kubenswrapper[4687]: I0314 10:04:47.417389 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:47 crc kubenswrapper[4687]: I0314 10:04:47.604240 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:47 crc kubenswrapper[4687]: I0314 10:04:47.658801 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:48 crc kubenswrapper[4687]: I0314 10:04:48.738213 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:04:48 crc kubenswrapper[4687]: E0314 10:04:48.738766 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:04:49 crc kubenswrapper[4687]: I0314 10:04:49.574979 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9fqbj" 
podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="registry-server" containerID="cri-o://29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317" gracePeriod=2 Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.370733 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.463681 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjzjj\" (UniqueName: \"kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj\") pod \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.463832 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content\") pod \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.463939 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities\") pod \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\" (UID: \"8898fefa-9832-43f9-b581-3e3f2b3fefcd\") " Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.464841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities" (OuterVolumeSpecName: "utilities") pod "8898fefa-9832-43f9-b581-3e3f2b3fefcd" (UID: "8898fefa-9832-43f9-b581-3e3f2b3fefcd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.469322 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj" (OuterVolumeSpecName: "kube-api-access-bjzjj") pod "8898fefa-9832-43f9-b581-3e3f2b3fefcd" (UID: "8898fefa-9832-43f9-b581-3e3f2b3fefcd"). InnerVolumeSpecName "kube-api-access-bjzjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.530130 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8898fefa-9832-43f9-b581-3e3f2b3fefcd" (UID: "8898fefa-9832-43f9-b581-3e3f2b3fefcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.565900 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjzjj\" (UniqueName: \"kubernetes.io/projected/8898fefa-9832-43f9-b581-3e3f2b3fefcd-kube-api-access-bjzjj\") on node \"crc\" DevicePath \"\"" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.565948 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.565958 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8898fefa-9832-43f9-b581-3e3f2b3fefcd-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.584916 4687 generic.go:334] "Generic (PLEG): container finished" podID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" 
containerID="29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317" exitCode=0 Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.584963 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerDied","Data":"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317"} Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.585008 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fqbj" event={"ID":"8898fefa-9832-43f9-b581-3e3f2b3fefcd","Type":"ContainerDied","Data":"cd28fc51d0862f36f8b57b2436379cc0d2cf67647b17091210289b13d2ae8a2d"} Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.585032 4687 scope.go:117] "RemoveContainer" containerID="29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.585196 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fqbj" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.611812 4687 scope.go:117] "RemoveContainer" containerID="40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.631440 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.654616 4687 scope.go:117] "RemoveContainer" containerID="d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.660561 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9fqbj"] Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.687580 4687 scope.go:117] "RemoveContainer" containerID="29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317" Mar 14 10:04:50 crc kubenswrapper[4687]: E0314 10:04:50.688093 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317\": container with ID starting with 29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317 not found: ID does not exist" containerID="29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.688134 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317"} err="failed to get container status \"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317\": rpc error: code = NotFound desc = could not find container \"29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317\": container with ID starting with 29f214fec7cc8d6e3b1d59dd79345c4c650f14e6ab4ba63476ca0e0af3f69317 not 
found: ID does not exist" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.688158 4687 scope.go:117] "RemoveContainer" containerID="40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8" Mar 14 10:04:50 crc kubenswrapper[4687]: E0314 10:04:50.688479 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8\": container with ID starting with 40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8 not found: ID does not exist" containerID="40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.688783 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8"} err="failed to get container status \"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8\": rpc error: code = NotFound desc = could not find container \"40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8\": container with ID starting with 40129f5f97cab0a8429522a648a16718c7a4b4984551a25c85ade51e3c7babf8 not found: ID does not exist" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.688804 4687 scope.go:117] "RemoveContainer" containerID="d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971" Mar 14 10:04:50 crc kubenswrapper[4687]: E0314 10:04:50.689142 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971\": container with ID starting with d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971 not found: ID does not exist" containerID="d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971" Mar 14 10:04:50 crc kubenswrapper[4687]: I0314 10:04:50.689171 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971"} err="failed to get container status \"d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971\": rpc error: code = NotFound desc = could not find container \"d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971\": container with ID starting with d7ab2c1e5fd0ddf191f8d1fd44dbeb1c77e8c93c560cb41ce26fc99988e18971 not found: ID does not exist" Mar 14 10:04:51 crc kubenswrapper[4687]: I0314 10:04:51.749174 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" path="/var/lib/kubelet/pods/8898fefa-9832-43f9-b581-3e3f2b3fefcd/volumes" Mar 14 10:05:00 crc kubenswrapper[4687]: I0314 10:05:00.736963 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:05:00 crc kubenswrapper[4687]: E0314 10:05:00.737690 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:05:00 crc kubenswrapper[4687]: I0314 10:05:00.738187 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:05:00 crc kubenswrapper[4687]: E0314 10:05:00.738407 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:05:11 crc kubenswrapper[4687]: I0314 
10:05:11.737696 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:05:11 crc kubenswrapper[4687]: E0314 10:05:11.738364 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:05:12 crc kubenswrapper[4687]: I0314 10:05:12.738061 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:05:12 crc kubenswrapper[4687]: E0314 10:05:12.738470 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:05:24 crc kubenswrapper[4687]: I0314 10:05:24.737922 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:05:24 crc kubenswrapper[4687]: E0314 10:05:24.738614 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:05:26 crc kubenswrapper[4687]: I0314 10:05:26.737322 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:05:26 crc kubenswrapper[4687]: E0314 10:05:26.737944 4687 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:05:36 crc kubenswrapper[4687]: I0314 10:05:36.737948 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:05:36 crc kubenswrapper[4687]: E0314 10:05:36.738876 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:05:41 crc kubenswrapper[4687]: I0314 10:05:41.737186 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:05:41 crc kubenswrapper[4687]: E0314 10:05:41.738038 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:05:47 crc kubenswrapper[4687]: I0314 10:05:47.738779 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:05:47 crc kubenswrapper[4687]: E0314 10:05:47.740638 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" 
pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:05:54 crc kubenswrapper[4687]: I0314 10:05:54.111173 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:05:54 crc kubenswrapper[4687]: I0314 10:05:54.111737 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:05:56 crc kubenswrapper[4687]: I0314 10:05:56.738411 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:05:56 crc kubenswrapper[4687]: E0314 10:05:56.739234 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.138808 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558046-4nb2g"] Mar 14 10:06:00 crc kubenswrapper[4687]: E0314 10:06:00.141996 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="extract-utilities" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.142028 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="extract-utilities" Mar 14 
10:06:00 crc kubenswrapper[4687]: E0314 10:06:00.142063 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="registry-server" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.142072 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="registry-server" Mar 14 10:06:00 crc kubenswrapper[4687]: E0314 10:06:00.142081 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="extract-content" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.142090 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="extract-content" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.142434 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8898fefa-9832-43f9-b581-3e3f2b3fefcd" containerName="registry-server" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.143215 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.145343 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.145466 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.145647 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.159894 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-4nb2g"] Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.299932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd8j\" (UniqueName: \"kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j\") pod \"auto-csr-approver-29558046-4nb2g\" (UID: \"74aa8607-7870-47c0-baa4-eb2df06d999c\") " pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.401562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkd8j\" (UniqueName: \"kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j\") pod \"auto-csr-approver-29558046-4nb2g\" (UID: \"74aa8607-7870-47c0-baa4-eb2df06d999c\") " pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.419846 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkd8j\" (UniqueName: \"kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j\") pod \"auto-csr-approver-29558046-4nb2g\" (UID: \"74aa8607-7870-47c0-baa4-eb2df06d999c\") " 
pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.473808 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.737187 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:06:00 crc kubenswrapper[4687]: E0314 10:06:00.737567 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:06:00 crc kubenswrapper[4687]: I0314 10:06:00.935931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-4nb2g"] Mar 14 10:06:01 crc kubenswrapper[4687]: I0314 10:06:01.360973 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" event={"ID":"74aa8607-7870-47c0-baa4-eb2df06d999c","Type":"ContainerStarted","Data":"6445103cb678214dee737715b79086ce1dbab3f90e02307eaca07111b9b3ae85"} Mar 14 10:06:03 crc kubenswrapper[4687]: I0314 10:06:03.378113 4687 generic.go:334] "Generic (PLEG): container finished" podID="74aa8607-7870-47c0-baa4-eb2df06d999c" containerID="d177d1a29b21559694665b14eae2a78160120782da0767dc9d84238c93d3355d" exitCode=0 Mar 14 10:06:03 crc kubenswrapper[4687]: I0314 10:06:03.378622 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" event={"ID":"74aa8607-7870-47c0-baa4-eb2df06d999c","Type":"ContainerDied","Data":"d177d1a29b21559694665b14eae2a78160120782da0767dc9d84238c93d3355d"} Mar 14 10:06:04 crc kubenswrapper[4687]: I0314 10:06:04.856063 4687 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.048936 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkd8j\" (UniqueName: \"kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j\") pod \"74aa8607-7870-47c0-baa4-eb2df06d999c\" (UID: \"74aa8607-7870-47c0-baa4-eb2df06d999c\") " Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.068451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j" (OuterVolumeSpecName: "kube-api-access-zkd8j") pod "74aa8607-7870-47c0-baa4-eb2df06d999c" (UID: "74aa8607-7870-47c0-baa4-eb2df06d999c"). InnerVolumeSpecName "kube-api-access-zkd8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.151656 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkd8j\" (UniqueName: \"kubernetes.io/projected/74aa8607-7870-47c0-baa4-eb2df06d999c-kube-api-access-zkd8j\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.397946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" event={"ID":"74aa8607-7870-47c0-baa4-eb2df06d999c","Type":"ContainerDied","Data":"6445103cb678214dee737715b79086ce1dbab3f90e02307eaca07111b9b3ae85"} Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.398213 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6445103cb678214dee737715b79086ce1dbab3f90e02307eaca07111b9b3ae85" Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.397982 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-4nb2g" Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.921167 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-gr56b"] Mar 14 10:06:05 crc kubenswrapper[4687]: I0314 10:06:05.929200 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-gr56b"] Mar 14 10:06:07 crc kubenswrapper[4687]: I0314 10:06:07.737707 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:06:07 crc kubenswrapper[4687]: E0314 10:06:07.738214 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:06:07 crc kubenswrapper[4687]: I0314 10:06:07.748807 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb524d62-fef9-4b18-a14e-e8558e70a40c" path="/var/lib/kubelet/pods/fb524d62-fef9-4b18-a14e-e8558e70a40c/volumes" Mar 14 10:06:15 crc kubenswrapper[4687]: I0314 10:06:15.771629 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:06:15 crc kubenswrapper[4687]: E0314 10:06:15.772627 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:06:22 crc kubenswrapper[4687]: I0314 10:06:22.737037 4687 scope.go:117] "RemoveContainer" 
containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:06:22 crc kubenswrapper[4687]: E0314 10:06:22.737686 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:06:24 crc kubenswrapper[4687]: I0314 10:06:24.110905 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:06:24 crc kubenswrapper[4687]: I0314 10:06:24.111241 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:06:29 crc kubenswrapper[4687]: I0314 10:06:29.737226 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:06:29 crc kubenswrapper[4687]: E0314 10:06:29.738029 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:06:31 crc kubenswrapper[4687]: I0314 10:06:31.021266 4687 scope.go:117] "RemoveContainer" 
containerID="8121fe1a889a33bc54739366bacfe185e00b9b480097170686d06f9512bee6fa" Mar 14 10:06:33 crc kubenswrapper[4687]: I0314 10:06:33.737368 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:06:33 crc kubenswrapper[4687]: E0314 10:06:33.737880 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:06:40 crc kubenswrapper[4687]: I0314 10:06:40.736809 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:06:40 crc kubenswrapper[4687]: E0314 10:06:40.737586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:06:44 crc kubenswrapper[4687]: I0314 10:06:44.737756 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:06:44 crc kubenswrapper[4687]: E0314 10:06:44.738592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:06:52 crc kubenswrapper[4687]: I0314 10:06:52.737721 4687 scope.go:117] "RemoveContainer" 
containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:06:52 crc kubenswrapper[4687]: E0314 10:06:52.739282 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.111586 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.111911 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.111945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.112718 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.112805 4687 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9" gracePeriod=600 Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.849871 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9" exitCode=0 Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.849911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9"} Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.850403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c"} Mar 14 10:06:54 crc kubenswrapper[4687]: I0314 10:06:54.850431 4687 scope.go:117] "RemoveContainer" containerID="26b7c3a8ce55773413afd65f8f294996e9d86911d205dbb1311cb85bf400f5b5" Mar 14 10:06:59 crc kubenswrapper[4687]: I0314 10:06:59.737950 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:06:59 crc kubenswrapper[4687]: E0314 10:06:59.738900 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:07:07 crc 
kubenswrapper[4687]: I0314 10:07:07.737000 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:07:07 crc kubenswrapper[4687]: E0314 10:07:07.737726 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:07:13 crc kubenswrapper[4687]: I0314 10:07:13.737652 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:07:13 crc kubenswrapper[4687]: E0314 10:07:13.738273 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:07:20 crc kubenswrapper[4687]: I0314 10:07:20.742757 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:07:20 crc kubenswrapper[4687]: E0314 10:07:20.744272 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:07:26 crc kubenswrapper[4687]: I0314 10:07:26.737579 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:07:26 crc kubenswrapper[4687]: E0314 10:07:26.738152 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:07:33 crc kubenswrapper[4687]: I0314 10:07:33.738000 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:07:33 crc kubenswrapper[4687]: E0314 10:07:33.738696 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:07:39 crc kubenswrapper[4687]: I0314 10:07:39.736688 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:07:39 crc kubenswrapper[4687]: E0314 10:07:39.737418 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:07:48 crc kubenswrapper[4687]: I0314 10:07:48.737432 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:07:48 crc kubenswrapper[4687]: E0314 10:07:48.739642 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:07:52 crc kubenswrapper[4687]: I0314 10:07:52.736693 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:07:52 crc kubenswrapper[4687]: E0314 10:07:52.737196 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.141915 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558048-czrm4"] Mar 14 10:08:00 crc kubenswrapper[4687]: E0314 10:08:00.142922 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74aa8607-7870-47c0-baa4-eb2df06d999c" containerName="oc" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.142937 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="74aa8607-7870-47c0-baa4-eb2df06d999c" containerName="oc" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.143181 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="74aa8607-7870-47c0-baa4-eb2df06d999c" containerName="oc" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.144055 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.146502 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.146686 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.146981 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.151545 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-czrm4"] Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.284713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsdcp\" (UniqueName: \"kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp\") pod \"auto-csr-approver-29558048-czrm4\" (UID: \"cd607b60-570e-4d98-bec8-3deffde8123a\") " pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.386870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsdcp\" (UniqueName: \"kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp\") pod \"auto-csr-approver-29558048-czrm4\" (UID: \"cd607b60-570e-4d98-bec8-3deffde8123a\") " pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.410166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsdcp\" (UniqueName: \"kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp\") pod \"auto-csr-approver-29558048-czrm4\" (UID: \"cd607b60-570e-4d98-bec8-3deffde8123a\") " 
pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.464303 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:00 crc kubenswrapper[4687]: I0314 10:08:00.879368 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-czrm4"] Mar 14 10:08:01 crc kubenswrapper[4687]: I0314 10:08:01.475736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-czrm4" event={"ID":"cd607b60-570e-4d98-bec8-3deffde8123a","Type":"ContainerStarted","Data":"7265a03f86dd3eee78c36c478418271cbf7776f16c9f4c4a8b9ddae3440f7fa8"} Mar 14 10:08:02 crc kubenswrapper[4687]: I0314 10:08:02.490419 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd607b60-570e-4d98-bec8-3deffde8123a" containerID="a34b0e4edc3f29923b426c859e25dea860f7dd1d11e1932218163ae343d03886" exitCode=0 Mar 14 10:08:02 crc kubenswrapper[4687]: I0314 10:08:02.490525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-czrm4" event={"ID":"cd607b60-570e-4d98-bec8-3deffde8123a","Type":"ContainerDied","Data":"a34b0e4edc3f29923b426c859e25dea860f7dd1d11e1932218163ae343d03886"} Mar 14 10:08:03 crc kubenswrapper[4687]: I0314 10:08:03.737025 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.190162 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.376755 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsdcp\" (UniqueName: \"kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp\") pod \"cd607b60-570e-4d98-bec8-3deffde8123a\" (UID: \"cd607b60-570e-4d98-bec8-3deffde8123a\") " Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.460440 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp" (OuterVolumeSpecName: "kube-api-access-tsdcp") pod "cd607b60-570e-4d98-bec8-3deffde8123a" (UID: "cd607b60-570e-4d98-bec8-3deffde8123a"). InnerVolumeSpecName "kube-api-access-tsdcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.479309 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsdcp\" (UniqueName: \"kubernetes.io/projected/cd607b60-570e-4d98-bec8-3deffde8123a-kube-api-access-tsdcp\") on node \"crc\" DevicePath \"\"" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.510116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e"} Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.511982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-czrm4" event={"ID":"cd607b60-570e-4d98-bec8-3deffde8123a","Type":"ContainerDied","Data":"7265a03f86dd3eee78c36c478418271cbf7776f16c9f4c4a8b9ddae3440f7fa8"} Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.512028 4687 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7265a03f86dd3eee78c36c478418271cbf7776f16c9f4c4a8b9ddae3440f7fa8" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.512089 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-czrm4" Mar 14 10:08:04 crc kubenswrapper[4687]: I0314 10:08:04.737587 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:08:04 crc kubenswrapper[4687]: E0314 10:08:04.737860 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:05 crc kubenswrapper[4687]: I0314 10:08:05.274947 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-gbvx5"] Mar 14 10:08:05 crc kubenswrapper[4687]: I0314 10:08:05.283417 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-gbvx5"] Mar 14 10:08:05 crc kubenswrapper[4687]: I0314 10:08:05.757765 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91af21a4-1ca4-4412-b724-e3d71d50d1f1" path="/var/lib/kubelet/pods/91af21a4-1ca4-4412-b724-e3d71d50d1f1/volumes" Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.220053 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.220609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.584509 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" 
containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" exitCode=1 Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.584548 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e"} Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.584596 4687 scope.go:117] "RemoveContainer" containerID="7009650d4a7fdce29ffdcca4ffa2f5d0936387da4ab93050d714797b4a4c951c" Mar 14 10:08:12 crc kubenswrapper[4687]: I0314 10:08:12.585495 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:08:12 crc kubenswrapper[4687]: E0314 10:08:12.585931 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:08:19 crc kubenswrapper[4687]: I0314 10:08:19.737403 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:08:20 crc kubenswrapper[4687]: I0314 10:08:20.677990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7"} Mar 14 10:08:22 crc kubenswrapper[4687]: I0314 10:08:22.128533 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:08:22 crc kubenswrapper[4687]: I0314 10:08:22.128604 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:08:22 crc kubenswrapper[4687]: I0314 10:08:22.220226 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:08:22 crc kubenswrapper[4687]: I0314 10:08:22.220312 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:08:22 crc kubenswrapper[4687]: I0314 10:08:22.221486 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:08:22 crc kubenswrapper[4687]: E0314 10:08:22.221977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:08:28 crc kubenswrapper[4687]: I0314 10:08:28.773585 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" exitCode=1 Mar 14 10:08:28 crc kubenswrapper[4687]: I0314 10:08:28.773658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7"} Mar 14 10:08:28 crc kubenswrapper[4687]: I0314 10:08:28.774176 4687 scope.go:117] "RemoveContainer" containerID="dea6ca226a9f4a8b6581a47f91733077597ac7544d76bf9ebd671ec8964385f7" Mar 14 10:08:28 crc kubenswrapper[4687]: I0314 10:08:28.774975 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:08:28 crc kubenswrapper[4687]: E0314 10:08:28.775234 4687 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:31 crc kubenswrapper[4687]: I0314 10:08:31.123966 4687 scope.go:117] "RemoveContainer" containerID="49e592fc37fd4956c0cc70ce15aad2969be6dfeff2dcee2b8932255dd81e0314" Mar 14 10:08:32 crc kubenswrapper[4687]: I0314 10:08:32.127788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:08:32 crc kubenswrapper[4687]: I0314 10:08:32.128150 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:08:32 crc kubenswrapper[4687]: I0314 10:08:32.129081 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:08:32 crc kubenswrapper[4687]: E0314 10:08:32.129415 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:35 crc kubenswrapper[4687]: I0314 10:08:35.741964 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:08:35 crc kubenswrapper[4687]: E0314 10:08:35.742593 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:08:42 crc kubenswrapper[4687]: I0314 10:08:42.737899 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:08:42 crc kubenswrapper[4687]: E0314 10:08:42.738707 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.063535 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:08:46 crc kubenswrapper[4687]: E0314 10:08:46.064929 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd607b60-570e-4d98-bec8-3deffde8123a" containerName="oc" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.064962 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd607b60-570e-4d98-bec8-3deffde8123a" containerName="oc" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.065513 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd607b60-570e-4d98-bec8-3deffde8123a" containerName="oc" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.068540 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.076933 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.198889 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.198979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2plx\" (UniqueName: \"kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.199144 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.301168 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.301313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j2plx\" (UniqueName: \"kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.301621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.301686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.301908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.319731 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2plx\" (UniqueName: \"kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx\") pod \"redhat-operators-rhq4b\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.395158 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.879108 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:08:46 crc kubenswrapper[4687]: I0314 10:08:46.968601 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerStarted","Data":"13aaeebf8a57b17f251efb57420ce29e06262ed575755ca30085bdbb1c7d1587"} Mar 14 10:08:47 crc kubenswrapper[4687]: I0314 10:08:47.985173 4687 generic.go:334] "Generic (PLEG): container finished" podID="db53d529-8271-405b-ae7b-78b666dde5ac" containerID="a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7" exitCode=0 Mar 14 10:08:47 crc kubenswrapper[4687]: I0314 10:08:47.985224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerDied","Data":"a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7"} Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.247108 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.257892 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.268455 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.286678 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.286779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.286868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb4w\" (UniqueName: \"kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.390436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvb4w\" (UniqueName: \"kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.390688 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.390934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.391869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.391947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.421067 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvb4w\" (UniqueName: \"kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w\") pod \"community-operators-4fzp5\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.612191 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:48 crc kubenswrapper[4687]: I0314 10:08:48.995175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerStarted","Data":"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9"} Mar 14 10:08:49 crc kubenswrapper[4687]: W0314 10:08:49.077448 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07b19c0d_faa8_40bb_9bf5_c23dca044fb7.slice/crio-502ca69f465eb88c24d4201f1280d1e843d3022be75b8ce64a9a8f9dfc2280b9 WatchSource:0}: Error finding container 502ca69f465eb88c24d4201f1280d1e843d3022be75b8ce64a9a8f9dfc2280b9: Status 404 returned error can't find the container with id 502ca69f465eb88c24d4201f1280d1e843d3022be75b8ce64a9a8f9dfc2280b9 Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.083248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.244432 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.247202 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.253866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.309358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.309480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.309499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2th\" (UniqueName: \"kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.412139 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.412816 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.412853 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2th\" (UniqueName: \"kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.412854 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.413475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.433138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2th\" (UniqueName: \"kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th\") pod \"redhat-marketplace-6dvlq\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:49 crc kubenswrapper[4687]: I0314 10:08:49.585654 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:50 crc kubenswrapper[4687]: I0314 10:08:50.004593 4687 generic.go:334] "Generic (PLEG): container finished" podID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerID="c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f" exitCode=0 Mar 14 10:08:50 crc kubenswrapper[4687]: I0314 10:08:50.004678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerDied","Data":"c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f"} Mar 14 10:08:50 crc kubenswrapper[4687]: I0314 10:08:50.004711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerStarted","Data":"502ca69f465eb88c24d4201f1280d1e843d3022be75b8ce64a9a8f9dfc2280b9"} Mar 14 10:08:50 crc kubenswrapper[4687]: I0314 10:08:50.603755 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:08:50 crc kubenswrapper[4687]: W0314 10:08:50.614889 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb185519_0edd_4a06_9952_085ffcada74f.slice/crio-c3f2d3fd69554bb9a7f0a78d2eda2ef01e4dd91fbeb1f8dcd9db4b3923856a80 WatchSource:0}: Error finding container c3f2d3fd69554bb9a7f0a78d2eda2ef01e4dd91fbeb1f8dcd9db4b3923856a80: Status 404 returned error can't find the container with id c3f2d3fd69554bb9a7f0a78d2eda2ef01e4dd91fbeb1f8dcd9db4b3923856a80 Mar 14 10:08:50 crc kubenswrapper[4687]: I0314 10:08:50.737511 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:08:50 crc kubenswrapper[4687]: E0314 10:08:50.737780 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:08:51 crc kubenswrapper[4687]: I0314 10:08:51.014023 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerStarted","Data":"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b"} Mar 14 10:08:51 crc kubenswrapper[4687]: I0314 10:08:51.014362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerStarted","Data":"c3f2d3fd69554bb9a7f0a78d2eda2ef01e4dd91fbeb1f8dcd9db4b3923856a80"} Mar 14 10:08:52 crc kubenswrapper[4687]: I0314 10:08:52.026570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerStarted","Data":"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9"} Mar 14 10:08:52 crc kubenswrapper[4687]: I0314 10:08:52.028408 4687 generic.go:334] "Generic (PLEG): container finished" podID="db185519-0edd-4a06-9952-085ffcada74f" containerID="b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b" exitCode=0 Mar 14 10:08:52 crc kubenswrapper[4687]: I0314 10:08:52.028443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerDied","Data":"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b"} Mar 14 10:08:53 crc kubenswrapper[4687]: I0314 10:08:53.039906 4687 generic.go:334] "Generic (PLEG): container finished" podID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" 
containerID="d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9" exitCode=0 Mar 14 10:08:53 crc kubenswrapper[4687]: I0314 10:08:53.039966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerDied","Data":"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9"} Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.049357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerStarted","Data":"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06"} Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.051781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerStarted","Data":"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6"} Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.088949 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fzp5" podStartSLOduration=2.540134685 podStartE2EDuration="6.088928704s" podCreationTimestamp="2026-03-14 10:08:48 +0000 UTC" firstStartedPulling="2026-03-14 10:08:50.006052297 +0000 UTC m=+4314.994292672" lastFinishedPulling="2026-03-14 10:08:53.554846316 +0000 UTC m=+4318.543086691" observedRunningTime="2026-03-14 10:08:54.088164976 +0000 UTC m=+4319.076405361" watchObservedRunningTime="2026-03-14 10:08:54.088928704 +0000 UTC m=+4319.077169079" Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.111488 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.111752 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:08:54 crc kubenswrapper[4687]: I0314 10:08:54.737190 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:08:54 crc kubenswrapper[4687]: E0314 10:08:54.737451 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:08:55 crc kubenswrapper[4687]: I0314 10:08:55.060846 4687 generic.go:334] "Generic (PLEG): container finished" podID="db53d529-8271-405b-ae7b-78b666dde5ac" containerID="60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9" exitCode=0 Mar 14 10:08:55 crc kubenswrapper[4687]: I0314 10:08:55.060926 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerDied","Data":"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9"} Mar 14 10:08:55 crc kubenswrapper[4687]: I0314 10:08:55.063245 4687 generic.go:334] "Generic (PLEG): container finished" podID="db185519-0edd-4a06-9952-085ffcada74f" containerID="67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06" exitCode=0 Mar 14 10:08:55 crc kubenswrapper[4687]: I0314 10:08:55.063319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerDied","Data":"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06"} Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.072240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerStarted","Data":"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac"} Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.075283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerStarted","Data":"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00"} Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.099301 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rhq4b" podStartSLOduration=2.5997708619999997 podStartE2EDuration="10.099286347s" podCreationTimestamp="2026-03-14 10:08:46 +0000 UTC" firstStartedPulling="2026-03-14 10:08:47.988437045 +0000 UTC m=+4312.976677410" lastFinishedPulling="2026-03-14 10:08:55.48795252 +0000 UTC m=+4320.476192895" observedRunningTime="2026-03-14 10:08:56.094633743 +0000 UTC m=+4321.082874108" watchObservedRunningTime="2026-03-14 10:08:56.099286347 +0000 UTC m=+4321.087526722" Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.124386 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dvlq" podStartSLOduration=3.678447067 podStartE2EDuration="7.124366454s" podCreationTimestamp="2026-03-14 10:08:49 +0000 UTC" firstStartedPulling="2026-03-14 10:08:52.030666632 +0000 UTC m=+4317.018907017" lastFinishedPulling="2026-03-14 10:08:55.476585989 +0000 UTC m=+4320.464826404" observedRunningTime="2026-03-14 
10:08:56.118142712 +0000 UTC m=+4321.106383097" watchObservedRunningTime="2026-03-14 10:08:56.124366454 +0000 UTC m=+4321.112606829" Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.395788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:56 crc kubenswrapper[4687]: I0314 10:08:56.395847 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:08:57 crc kubenswrapper[4687]: I0314 10:08:57.451872 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rhq4b" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" probeResult="failure" output=< Mar 14 10:08:57 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 10:08:57 crc kubenswrapper[4687]: > Mar 14 10:08:58 crc kubenswrapper[4687]: I0314 10:08:58.612700 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:58 crc kubenswrapper[4687]: I0314 10:08:58.612750 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:08:59 crc kubenswrapper[4687]: I0314 10:08:59.586557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:59 crc kubenswrapper[4687]: I0314 10:08:59.588965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:08:59 crc kubenswrapper[4687]: I0314 10:08:59.661775 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4fzp5" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="registry-server" probeResult="failure" output=< Mar 14 10:08:59 crc kubenswrapper[4687]: 
timeout: failed to connect service ":50051" within 1s Mar 14 10:08:59 crc kubenswrapper[4687]: > Mar 14 10:09:01 crc kubenswrapper[4687]: I0314 10:09:01.410411 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6dvlq" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="registry-server" probeResult="failure" output=< Mar 14 10:09:01 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 10:09:01 crc kubenswrapper[4687]: > Mar 14 10:09:03 crc kubenswrapper[4687]: I0314 10:09:03.737591 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:09:03 crc kubenswrapper[4687]: E0314 10:09:03.738139 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:09:06 crc kubenswrapper[4687]: I0314 10:09:06.736885 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:09:06 crc kubenswrapper[4687]: E0314 10:09:06.737663 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:09:07 crc kubenswrapper[4687]: I0314 10:09:07.439597 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rhq4b" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" probeResult="failure" output=< Mar 14 10:09:07 crc 
kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 10:09:07 crc kubenswrapper[4687]: > Mar 14 10:09:08 crc kubenswrapper[4687]: I0314 10:09:08.655495 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:09:08 crc kubenswrapper[4687]: I0314 10:09:08.709933 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:09:08 crc kubenswrapper[4687]: I0314 10:09:08.910594 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:09:09 crc kubenswrapper[4687]: I0314 10:09:09.636074 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:09:09 crc kubenswrapper[4687]: I0314 10:09:09.683863 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.206603 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fzp5" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="registry-server" containerID="cri-o://8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6" gracePeriod=2 Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.716471 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.806619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities\") pod \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.806883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvb4w\" (UniqueName: \"kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w\") pod \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.806929 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content\") pod \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\" (UID: \"07b19c0d-faa8-40bb-9bf5-c23dca044fb7\") " Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.807532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities" (OuterVolumeSpecName: "utilities") pod "07b19c0d-faa8-40bb-9bf5-c23dca044fb7" (UID: "07b19c0d-faa8-40bb-9bf5-c23dca044fb7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.809822 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.814396 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w" (OuterVolumeSpecName: "kube-api-access-pvb4w") pod "07b19c0d-faa8-40bb-9bf5-c23dca044fb7" (UID: "07b19c0d-faa8-40bb-9bf5-c23dca044fb7"). InnerVolumeSpecName "kube-api-access-pvb4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.862246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b19c0d-faa8-40bb-9bf5-c23dca044fb7" (UID: "07b19c0d-faa8-40bb-9bf5-c23dca044fb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.911456 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvb4w\" (UniqueName: \"kubernetes.io/projected/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-kube-api-access-pvb4w\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:10 crc kubenswrapper[4687]: I0314 10:09:10.911489 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b19c0d-faa8-40bb-9bf5-c23dca044fb7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.220398 4687 generic.go:334] "Generic (PLEG): container finished" podID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerID="8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6" exitCode=0 Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.220439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerDied","Data":"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6"} Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.220468 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fzp5" event={"ID":"07b19c0d-faa8-40bb-9bf5-c23dca044fb7","Type":"ContainerDied","Data":"502ca69f465eb88c24d4201f1280d1e843d3022be75b8ce64a9a8f9dfc2280b9"} Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.220487 4687 scope.go:117] "RemoveContainer" containerID="8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.220624 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fzp5" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.240421 4687 scope.go:117] "RemoveContainer" containerID="d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.262816 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.267302 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fzp5"] Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.295646 4687 scope.go:117] "RemoveContainer" containerID="c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.347221 4687 scope.go:117] "RemoveContainer" containerID="8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6" Mar 14 10:09:11 crc kubenswrapper[4687]: E0314 10:09:11.347793 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6\": container with ID starting with 8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6 not found: ID does not exist" containerID="8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.347845 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6"} err="failed to get container status \"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6\": rpc error: code = NotFound desc = could not find container \"8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6\": container with ID starting with 8391af77da3a1cb24b64993496396ae59c49195ca66d7b7385227ef987e09ae6 not 
found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.347878 4687 scope.go:117] "RemoveContainer" containerID="d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9" Mar 14 10:09:11 crc kubenswrapper[4687]: E0314 10:09:11.348244 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9\": container with ID starting with d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9 not found: ID does not exist" containerID="d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.348275 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9"} err="failed to get container status \"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9\": rpc error: code = NotFound desc = could not find container \"d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9\": container with ID starting with d6cc51bc6e004a9a1526226c84dc030ba406f48d1ed66d7c3c7527942fcd15c9 not found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.348295 4687 scope.go:117] "RemoveContainer" containerID="c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f" Mar 14 10:09:11 crc kubenswrapper[4687]: E0314 10:09:11.348978 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f\": container with ID starting with c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f not found: ID does not exist" containerID="c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.349082 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f"} err="failed to get container status \"c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f\": rpc error: code = NotFound desc = could not find container \"c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f\": container with ID starting with c59ee51e121e10c5f72f679e6752c2a7944b5ba06ae38fc50d6c38c0ac9cd15f not found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.694795 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.695063 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dvlq" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="registry-server" containerID="cri-o://cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00" gracePeriod=2 Mar 14 10:09:11 crc kubenswrapper[4687]: I0314 10:09:11.750538 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" path="/var/lib/kubelet/pods/07b19c0d-faa8-40bb-9bf5-c23dca044fb7/volumes" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.174442 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.232560 4687 generic.go:334] "Generic (PLEG): container finished" podID="db185519-0edd-4a06-9952-085ffcada74f" containerID="cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00" exitCode=0 Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.232627 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dvlq" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.232632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerDied","Data":"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00"} Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.232735 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dvlq" event={"ID":"db185519-0edd-4a06-9952-085ffcada74f","Type":"ContainerDied","Data":"c3f2d3fd69554bb9a7f0a78d2eda2ef01e4dd91fbeb1f8dcd9db4b3923856a80"} Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.232765 4687 scope.go:117] "RemoveContainer" containerID="cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.234100 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content\") pod \"db185519-0edd-4a06-9952-085ffcada74f\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.234215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities\") pod \"db185519-0edd-4a06-9952-085ffcada74f\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.234270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g2th\" (UniqueName: \"kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th\") pod \"db185519-0edd-4a06-9952-085ffcada74f\" (UID: \"db185519-0edd-4a06-9952-085ffcada74f\") " Mar 14 10:09:12 crc 
kubenswrapper[4687]: I0314 10:09:12.235619 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities" (OuterVolumeSpecName: "utilities") pod "db185519-0edd-4a06-9952-085ffcada74f" (UID: "db185519-0edd-4a06-9952-085ffcada74f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.240061 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th" (OuterVolumeSpecName: "kube-api-access-7g2th") pod "db185519-0edd-4a06-9952-085ffcada74f" (UID: "db185519-0edd-4a06-9952-085ffcada74f"). InnerVolumeSpecName "kube-api-access-7g2th". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.253273 4687 scope.go:117] "RemoveContainer" containerID="67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.272454 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db185519-0edd-4a06-9952-085ffcada74f" (UID: "db185519-0edd-4a06-9952-085ffcada74f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.296068 4687 scope.go:117] "RemoveContainer" containerID="b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.318741 4687 scope.go:117] "RemoveContainer" containerID="cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00" Mar 14 10:09:12 crc kubenswrapper[4687]: E0314 10:09:12.320877 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00\": container with ID starting with cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00 not found: ID does not exist" containerID="cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.320965 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00"} err="failed to get container status \"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00\": rpc error: code = NotFound desc = could not find container \"cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00\": container with ID starting with cc0e827c097789e6a6ef9ecb64789e2a88a3084f78b88d80d0ca417b27a26a00 not found: ID does not exist" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.321112 4687 scope.go:117] "RemoveContainer" containerID="67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06" Mar 14 10:09:12 crc kubenswrapper[4687]: E0314 10:09:12.321654 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06\": container with ID starting with 
67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06 not found: ID does not exist" containerID="67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.321696 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06"} err="failed to get container status \"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06\": rpc error: code = NotFound desc = could not find container \"67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06\": container with ID starting with 67eb578a05a56ef07644c12221e49a3eef14b950a5231693500a1cfe1d3d4e06 not found: ID does not exist" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.321727 4687 scope.go:117] "RemoveContainer" containerID="b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b" Mar 14 10:09:12 crc kubenswrapper[4687]: E0314 10:09:12.322160 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b\": container with ID starting with b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b not found: ID does not exist" containerID="b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.322244 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b"} err="failed to get container status \"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b\": rpc error: code = NotFound desc = could not find container \"b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b\": container with ID starting with b8bced8b74f992222778893d74d9cfb620882a4fc1c6e3e50ec4053bbbc2170b not found: ID does not 
exist" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.336790 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.336899 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db185519-0edd-4a06-9952-085ffcada74f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.336952 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g2th\" (UniqueName: \"kubernetes.io/projected/db185519-0edd-4a06-9952-085ffcada74f-kube-api-access-7g2th\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.575267 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:09:12 crc kubenswrapper[4687]: I0314 10:09:12.583550 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dvlq"] Mar 14 10:09:13 crc kubenswrapper[4687]: I0314 10:09:13.757149 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db185519-0edd-4a06-9952-085ffcada74f" path="/var/lib/kubelet/pods/db185519-0edd-4a06-9952-085ffcada74f/volumes" Mar 14 10:09:16 crc kubenswrapper[4687]: I0314 10:09:16.449525 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:09:16 crc kubenswrapper[4687]: I0314 10:09:16.520597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:09:17 crc kubenswrapper[4687]: I0314 10:09:17.500294 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:09:17 crc kubenswrapper[4687]: 
I0314 10:09:17.737760 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:09:17 crc kubenswrapper[4687]: E0314 10:09:17.738122 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.290285 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rhq4b" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" containerID="cri-o://81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac" gracePeriod=2 Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.819192 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.888834 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2plx\" (UniqueName: \"kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx\") pod \"db53d529-8271-405b-ae7b-78b666dde5ac\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.888888 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities\") pod \"db53d529-8271-405b-ae7b-78b666dde5ac\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.888978 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content\") pod \"db53d529-8271-405b-ae7b-78b666dde5ac\" (UID: \"db53d529-8271-405b-ae7b-78b666dde5ac\") " Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.889681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities" (OuterVolumeSpecName: "utilities") pod "db53d529-8271-405b-ae7b-78b666dde5ac" (UID: "db53d529-8271-405b-ae7b-78b666dde5ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.894756 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx" (OuterVolumeSpecName: "kube-api-access-j2plx") pod "db53d529-8271-405b-ae7b-78b666dde5ac" (UID: "db53d529-8271-405b-ae7b-78b666dde5ac"). InnerVolumeSpecName "kube-api-access-j2plx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.991117 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2plx\" (UniqueName: \"kubernetes.io/projected/db53d529-8271-405b-ae7b-78b666dde5ac-kube-api-access-j2plx\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:18 crc kubenswrapper[4687]: I0314 10:09:18.991154 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.028418 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db53d529-8271-405b-ae7b-78b666dde5ac" (UID: "db53d529-8271-405b-ae7b-78b666dde5ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.093351 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db53d529-8271-405b-ae7b-78b666dde5ac-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.305262 4687 generic.go:334] "Generic (PLEG): container finished" podID="db53d529-8271-405b-ae7b-78b666dde5ac" containerID="81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac" exitCode=0 Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.305315 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerDied","Data":"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac"} Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.305365 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rhq4b" event={"ID":"db53d529-8271-405b-ae7b-78b666dde5ac","Type":"ContainerDied","Data":"13aaeebf8a57b17f251efb57420ce29e06262ed575755ca30085bdbb1c7d1587"} Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.305388 4687 scope.go:117] "RemoveContainer" containerID="81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.305546 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhq4b" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.338763 4687 scope.go:117] "RemoveContainer" containerID="60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.347169 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.354710 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rhq4b"] Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.383505 4687 scope.go:117] "RemoveContainer" containerID="a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.408528 4687 scope.go:117] "RemoveContainer" containerID="81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac" Mar 14 10:09:19 crc kubenswrapper[4687]: E0314 10:09:19.411729 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac\": container with ID starting with 81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac not found: ID does not exist" containerID="81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.411788 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac"} err="failed to get container status \"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac\": rpc error: code = NotFound desc = could not find container \"81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac\": container with ID starting with 81a6ff2aa1e7910e8ecc7590eddf5d7b0e8ee850c67dcdfd84052738f800e7ac not found: ID does not exist" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.411817 4687 scope.go:117] "RemoveContainer" containerID="60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9" Mar 14 10:09:19 crc kubenswrapper[4687]: E0314 10:09:19.412123 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9\": container with ID starting with 60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9 not found: ID does not exist" containerID="60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.412158 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9"} err="failed to get container status \"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9\": rpc error: code = NotFound desc = could not find container \"60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9\": container with ID starting with 60f1ece436d0e05dedff1fb2e5dcea1a7cc20cfaa0399cf4caa92360f2fb11d9 not found: ID does not exist" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.412183 4687 scope.go:117] "RemoveContainer" containerID="a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7" Mar 14 10:09:19 crc kubenswrapper[4687]: E0314 
10:09:19.412443 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7\": container with ID starting with a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7 not found: ID does not exist" containerID="a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.412479 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7"} err="failed to get container status \"a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7\": rpc error: code = NotFound desc = could not find container \"a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7\": container with ID starting with a79dab554c69d535f534d17ea21bca015017b35c5c8e92801c1b4cc28ea860f7 not found: ID does not exist" Mar 14 10:09:19 crc kubenswrapper[4687]: I0314 10:09:19.750718 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" path="/var/lib/kubelet/pods/db53d529-8271-405b-ae7b-78b666dde5ac/volumes" Mar 14 10:09:20 crc kubenswrapper[4687]: I0314 10:09:20.736727 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:09:20 crc kubenswrapper[4687]: E0314 10:09:20.737188 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:09:24 crc kubenswrapper[4687]: I0314 10:09:24.110937 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:09:24 crc kubenswrapper[4687]: I0314 10:09:24.111289 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:09:31 crc kubenswrapper[4687]: I0314 10:09:31.736654 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:09:31 crc kubenswrapper[4687]: E0314 10:09:31.737517 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:09:31 crc kubenswrapper[4687]: I0314 10:09:31.737783 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:09:31 crc kubenswrapper[4687]: E0314 10:09:31.738594 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:09:42 crc kubenswrapper[4687]: I0314 10:09:42.737757 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:09:42 crc 
kubenswrapper[4687]: E0314 10:09:42.758617 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:09:45 crc kubenswrapper[4687]: I0314 10:09:45.747452 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:09:45 crc kubenswrapper[4687]: E0314 10:09:45.748427 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:09:53 crc kubenswrapper[4687]: I0314 10:09:53.736804 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:09:53 crc kubenswrapper[4687]: E0314 10:09:53.737441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.111211 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.111280 4687 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.111329 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.112212 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.112316 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" gracePeriod=600 Mar 14 10:09:54 crc kubenswrapper[4687]: E0314 10:09:54.248374 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.659560 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" exitCode=0 Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.659614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c"} Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.659944 4687 scope.go:117] "RemoveContainer" containerID="79401918b97642ed55de0fdda7284179ccb9b7d84a97bf986744d96dd0b79ab9" Mar 14 10:09:54 crc kubenswrapper[4687]: I0314 10:09:54.661027 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:09:54 crc kubenswrapper[4687]: E0314 10:09:54.661785 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:09:58 crc kubenswrapper[4687]: I0314 10:09:58.737707 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:09:58 crc kubenswrapper[4687]: E0314 10:09:58.738931 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.144487 4687 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558050-knl9t"] Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145151 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145164 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145189 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145196 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145215 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145221 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145232 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145237 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145251 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145256 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145269 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145275 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145289 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145294 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145307 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145313 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="extract-utilities" Mar 14 10:10:00 crc kubenswrapper[4687]: E0314 10:10:00.145326 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145349 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="extract-content" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145522 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="db53d529-8271-405b-ae7b-78b666dde5ac" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145542 4687 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="07b19c0d-faa8-40bb-9bf5-c23dca044fb7" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.145555 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="db185519-0edd-4a06-9952-085ffcada74f" containerName="registry-server" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.146259 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.149909 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.149991 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.153842 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.165442 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-knl9t"] Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.278816 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjhc\" (UniqueName: \"kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc\") pod \"auto-csr-approver-29558050-knl9t\" (UID: \"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa\") " pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.381784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjhc\" (UniqueName: \"kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc\") pod \"auto-csr-approver-29558050-knl9t\" (UID: \"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa\") " 
pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.406266 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjhc\" (UniqueName: \"kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc\") pod \"auto-csr-approver-29558050-knl9t\" (UID: \"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa\") " pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.476399 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:00 crc kubenswrapper[4687]: I0314 10:10:00.962632 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-knl9t"] Mar 14 10:10:01 crc kubenswrapper[4687]: W0314 10:10:01.471155 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod027d7c8b_4a19_4deb_8b3e_cd3d9ebd3bfa.slice/crio-7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81 WatchSource:0}: Error finding container 7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81: Status 404 returned error can't find the container with id 7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81 Mar 14 10:10:01 crc kubenswrapper[4687]: I0314 10:10:01.474602 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:10:01 crc kubenswrapper[4687]: I0314 10:10:01.728994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-knl9t" event={"ID":"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa","Type":"ContainerStarted","Data":"7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81"} Mar 14 10:10:03 crc kubenswrapper[4687]: I0314 10:10:03.776354 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" containerID="7a357a8668da03e208fb538311f4412fc5f24caed2253ccb955b3ad919dcbe97" exitCode=0 Mar 14 10:10:03 crc kubenswrapper[4687]: I0314 10:10:03.776440 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-knl9t" event={"ID":"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa","Type":"ContainerDied","Data":"7a357a8668da03e208fb538311f4412fc5f24caed2253ccb955b3ad919dcbe97"} Mar 14 10:10:04 crc kubenswrapper[4687]: I0314 10:10:04.736946 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:10:04 crc kubenswrapper[4687]: E0314 10:10:04.737448 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.268481 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.435060 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjhc\" (UniqueName: \"kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc\") pod \"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa\" (UID: \"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa\") " Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.447653 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc" (OuterVolumeSpecName: "kube-api-access-mgjhc") pod "027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" (UID: "027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa"). 
InnerVolumeSpecName "kube-api-access-mgjhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.537788 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjhc\" (UniqueName: \"kubernetes.io/projected/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa-kube-api-access-mgjhc\") on node \"crc\" DevicePath \"\"" Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.792442 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-knl9t" event={"ID":"027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa","Type":"ContainerDied","Data":"7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81"} Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.792496 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9062fa67091abc62f6cfb899644cee9f7b830b1c5027760ad06ff65e47df81" Mar 14 10:10:05 crc kubenswrapper[4687]: I0314 10:10:05.792563 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-knl9t" Mar 14 10:10:06 crc kubenswrapper[4687]: I0314 10:10:06.347835 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-7zzvs"] Mar 14 10:10:06 crc kubenswrapper[4687]: I0314 10:10:06.375438 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-7zzvs"] Mar 14 10:10:07 crc kubenswrapper[4687]: I0314 10:10:07.754520 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622c66fc-face-46ca-bcfc-12f45e038261" path="/var/lib/kubelet/pods/622c66fc-face-46ca-bcfc-12f45e038261/volumes" Mar 14 10:10:08 crc kubenswrapper[4687]: I0314 10:10:08.737176 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:10:08 crc kubenswrapper[4687]: E0314 10:10:08.737833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:10:09 crc kubenswrapper[4687]: I0314 10:10:09.744230 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:10:09 crc kubenswrapper[4687]: E0314 10:10:09.744520 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:10:15 crc kubenswrapper[4687]: I0314 
10:10:15.746010 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:10:15 crc kubenswrapper[4687]: E0314 10:10:15.747276 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:10:20 crc kubenswrapper[4687]: I0314 10:10:20.737010 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:10:20 crc kubenswrapper[4687]: I0314 10:10:20.737546 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:10:20 crc kubenswrapper[4687]: E0314 10:10:20.737748 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:10:20 crc kubenswrapper[4687]: E0314 10:10:20.737748 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:10:30 crc kubenswrapper[4687]: I0314 10:10:30.737864 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:10:30 crc 
kubenswrapper[4687]: E0314 10:10:30.738895 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:10:31 crc kubenswrapper[4687]: I0314 10:10:31.261055 4687 scope.go:117] "RemoveContainer" containerID="7ca9d91e1e6e9463a4178d9ffa1094e3c1d16152a5cb14b8773c68f6a3ed6552" Mar 14 10:10:31 crc kubenswrapper[4687]: I0314 10:10:31.737467 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:10:31 crc kubenswrapper[4687]: E0314 10:10:31.737815 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:10:32 crc kubenswrapper[4687]: I0314 10:10:32.737171 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:10:32 crc kubenswrapper[4687]: E0314 10:10:32.737640 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:10:43 crc kubenswrapper[4687]: I0314 10:10:43.737163 4687 scope.go:117] "RemoveContainer" 
containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:10:43 crc kubenswrapper[4687]: E0314 10:10:43.737871 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:10:44 crc kubenswrapper[4687]: I0314 10:10:44.737999 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:10:44 crc kubenswrapper[4687]: E0314 10:10:44.738539 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:10:46 crc kubenswrapper[4687]: I0314 10:10:46.737940 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:10:46 crc kubenswrapper[4687]: E0314 10:10:46.738761 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:10:56 crc kubenswrapper[4687]: I0314 10:10:56.737056 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:10:56 crc kubenswrapper[4687]: E0314 10:10:56.737733 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:10:58 crc kubenswrapper[4687]: I0314 10:10:58.737380 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:10:58 crc kubenswrapper[4687]: E0314 10:10:58.737973 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:11:00 crc kubenswrapper[4687]: I0314 10:11:00.736868 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:11:00 crc kubenswrapper[4687]: E0314 10:11:00.737481 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:11:09 crc kubenswrapper[4687]: I0314 10:11:09.737025 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:11:09 crc kubenswrapper[4687]: I0314 10:11:09.737627 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:11:09 crc kubenswrapper[4687]: 
E0314 10:11:09.737785 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:11:09 crc kubenswrapper[4687]: E0314 10:11:09.737903 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:11:12 crc kubenswrapper[4687]: I0314 10:11:12.736728 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:11:12 crc kubenswrapper[4687]: E0314 10:11:12.737269 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:11:20 crc kubenswrapper[4687]: I0314 10:11:20.737666 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:11:20 crc kubenswrapper[4687]: E0314 10:11:20.738300 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:11:21 crc kubenswrapper[4687]: I0314 10:11:21.737558 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:11:21 crc kubenswrapper[4687]: E0314 10:11:21.738096 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:11:24 crc kubenswrapper[4687]: I0314 10:11:24.737780 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:11:24 crc kubenswrapper[4687]: E0314 10:11:24.738577 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:11:32 crc kubenswrapper[4687]: I0314 10:11:32.737695 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:11:32 crc kubenswrapper[4687]: E0314 10:11:32.739524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:11:33 crc kubenswrapper[4687]: I0314 10:11:33.737681 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:11:33 crc kubenswrapper[4687]: E0314 10:11:33.737875 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:11:35 crc kubenswrapper[4687]: I0314 10:11:35.743232 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:11:35 crc kubenswrapper[4687]: E0314 10:11:35.744076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:11:44 crc kubenswrapper[4687]: I0314 10:11:44.737893 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:11:44 crc kubenswrapper[4687]: E0314 10:11:44.739079 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:11:47 crc kubenswrapper[4687]: I0314 10:11:47.737092 4687 scope.go:117] "RemoveContainer" 
containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:11:47 crc kubenswrapper[4687]: I0314 10:11:47.737793 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:11:47 crc kubenswrapper[4687]: E0314 10:11:47.738150 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:11:47 crc kubenswrapper[4687]: E0314 10:11:47.738171 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:11:59 crc kubenswrapper[4687]: I0314 10:11:59.737499 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:11:59 crc kubenswrapper[4687]: E0314 10:11:59.738148 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:11:59 crc kubenswrapper[4687]: I0314 10:11:59.738302 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:11:59 crc kubenswrapper[4687]: E0314 10:11:59.738524 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.150057 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558052-mm5bk"] Mar 14 10:12:00 crc kubenswrapper[4687]: E0314 10:12:00.150821 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.150911 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.151200 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.152031 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.155518 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.155829 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.156013 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.167160 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-mm5bk"] Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.290194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspzr\" (UniqueName: \"kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr\") pod \"auto-csr-approver-29558052-mm5bk\" (UID: \"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8\") " pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.392919 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zspzr\" (UniqueName: \"kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr\") pod \"auto-csr-approver-29558052-mm5bk\" (UID: \"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8\") " pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.412095 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspzr\" (UniqueName: \"kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr\") pod \"auto-csr-approver-29558052-mm5bk\" (UID: \"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8\") " 
pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.481545 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.737174 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:00 crc kubenswrapper[4687]: E0314 10:12:00.737509 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.922136 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-mm5bk"] Mar 14 10:12:00 crc kubenswrapper[4687]: I0314 10:12:00.993400 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" event={"ID":"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8","Type":"ContainerStarted","Data":"2446792d1dbc2371b33d322caa1f626f9f3a6f603625c2a1cc36e0107b9c49c2"} Mar 14 10:12:03 crc kubenswrapper[4687]: I0314 10:12:03.016262 4687 generic.go:334] "Generic (PLEG): container finished" podID="be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" containerID="0ea0a231dcd227d252e3c2fa79aff81b20409a8902b1d63d70a416e74a355acc" exitCode=0 Mar 14 10:12:03 crc kubenswrapper[4687]: I0314 10:12:03.016372 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" event={"ID":"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8","Type":"ContainerDied","Data":"0ea0a231dcd227d252e3c2fa79aff81b20409a8902b1d63d70a416e74a355acc"} Mar 14 10:12:04 crc kubenswrapper[4687]: I0314 10:12:04.340472 4687 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:04 crc kubenswrapper[4687]: I0314 10:12:04.376518 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zspzr\" (UniqueName: \"kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr\") pod \"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8\" (UID: \"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8\") " Mar 14 10:12:04 crc kubenswrapper[4687]: I0314 10:12:04.381966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr" (OuterVolumeSpecName: "kube-api-access-zspzr") pod "be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" (UID: "be964cc5-c7af-4035-9d4b-c2d6aaecbcc8"). InnerVolumeSpecName "kube-api-access-zspzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:12:04 crc kubenswrapper[4687]: I0314 10:12:04.479079 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zspzr\" (UniqueName: \"kubernetes.io/projected/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8-kube-api-access-zspzr\") on node \"crc\" DevicePath \"\"" Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.033918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" event={"ID":"be964cc5-c7af-4035-9d4b-c2d6aaecbcc8","Type":"ContainerDied","Data":"2446792d1dbc2371b33d322caa1f626f9f3a6f603625c2a1cc36e0107b9c49c2"} Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.034234 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2446792d1dbc2371b33d322caa1f626f9f3a6f603625c2a1cc36e0107b9c49c2" Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.033987 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-mm5bk" Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.405938 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-4nb2g"] Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.413007 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-4nb2g"] Mar 14 10:12:05 crc kubenswrapper[4687]: I0314 10:12:05.751408 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74aa8607-7870-47c0-baa4-eb2df06d999c" path="/var/lib/kubelet/pods/74aa8607-7870-47c0-baa4-eb2df06d999c/volumes" Mar 14 10:12:12 crc kubenswrapper[4687]: I0314 10:12:12.737527 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:12:12 crc kubenswrapper[4687]: E0314 10:12:12.738691 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:12:13 crc kubenswrapper[4687]: I0314 10:12:13.737442 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:12:13 crc kubenswrapper[4687]: I0314 10:12:13.737934 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:13 crc kubenswrapper[4687]: E0314 10:12:13.738015 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:12:13 crc kubenswrapper[4687]: E0314 10:12:13.738968 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:12:24 crc kubenswrapper[4687]: I0314 10:12:24.739063 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:24 crc kubenswrapper[4687]: E0314 10:12:24.740613 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:12:27 crc kubenswrapper[4687]: I0314 10:12:27.739144 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:12:27 crc kubenswrapper[4687]: E0314 10:12:27.740728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:12:28 crc kubenswrapper[4687]: I0314 10:12:28.737164 4687 scope.go:117] "RemoveContainer" 
containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:12:28 crc kubenswrapper[4687]: E0314 10:12:28.737796 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:12:31 crc kubenswrapper[4687]: I0314 10:12:31.385379 4687 scope.go:117] "RemoveContainer" containerID="d177d1a29b21559694665b14eae2a78160120782da0767dc9d84238c93d3355d" Mar 14 10:12:35 crc kubenswrapper[4687]: I0314 10:12:35.748248 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:35 crc kubenswrapper[4687]: E0314 10:12:35.749282 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:12:40 crc kubenswrapper[4687]: I0314 10:12:40.738535 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:12:40 crc kubenswrapper[4687]: E0314 10:12:40.739291 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:12:41 crc kubenswrapper[4687]: I0314 10:12:41.737322 4687 
scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:12:41 crc kubenswrapper[4687]: E0314 10:12:41.737769 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:12:48 crc kubenswrapper[4687]: I0314 10:12:48.736987 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:48 crc kubenswrapper[4687]: E0314 10:12:48.737620 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:12:52 crc kubenswrapper[4687]: I0314 10:12:52.737604 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:12:52 crc kubenswrapper[4687]: E0314 10:12:52.738671 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:12:53 crc kubenswrapper[4687]: I0314 10:12:53.737353 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:12:53 crc kubenswrapper[4687]: E0314 10:12:53.737781 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:12:59 crc kubenswrapper[4687]: I0314 10:12:59.737693 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:12:59 crc kubenswrapper[4687]: E0314 10:12:59.738727 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:13:06 crc kubenswrapper[4687]: I0314 10:13:06.736821 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:13:06 crc kubenswrapper[4687]: E0314 10:13:06.737863 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:07 crc kubenswrapper[4687]: I0314 10:13:07.737929 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:13:07 crc kubenswrapper[4687]: E0314 10:13:07.738761 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:13:14 crc kubenswrapper[4687]: I0314 10:13:14.737058 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:13:15 crc kubenswrapper[4687]: I0314 10:13:15.266757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7"} Mar 14 10:13:19 crc kubenswrapper[4687]: I0314 10:13:19.737018 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:13:19 crc kubenswrapper[4687]: E0314 10:13:19.737695 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:21 crc kubenswrapper[4687]: I0314 10:13:21.737789 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:13:21 crc kubenswrapper[4687]: E0314 10:13:21.738900 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:13:22 crc kubenswrapper[4687]: I0314 10:13:22.219892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:13:22 crc kubenswrapper[4687]: I0314 10:13:22.220323 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:13:23 crc kubenswrapper[4687]: I0314 10:13:23.345950 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" exitCode=1 Mar 14 10:13:23 crc kubenswrapper[4687]: I0314 10:13:23.346018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7"} Mar 14 10:13:23 crc kubenswrapper[4687]: I0314 10:13:23.346253 4687 scope.go:117] "RemoveContainer" containerID="d804e519e739939a93c1a9d7c52e1863214f967812fc36cbb3788243d05eb01e" Mar 14 10:13:23 crc kubenswrapper[4687]: I0314 10:13:23.346676 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:13:23 crc kubenswrapper[4687]: E0314 10:13:23.346968 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:13:30 crc kubenswrapper[4687]: I0314 10:13:30.737490 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:13:31 crc kubenswrapper[4687]: I0314 10:13:31.439651 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a"} Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.128096 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.128152 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.220207 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.220974 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:13:32 crc kubenswrapper[4687]: E0314 10:13:32.221180 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.221324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.449965 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:13:32 crc kubenswrapper[4687]: E0314 10:13:32.450279 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:13:32 crc kubenswrapper[4687]: I0314 10:13:32.737522 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:13:32 crc kubenswrapper[4687]: E0314 10:13:32.737981 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:13:39 crc kubenswrapper[4687]: I0314 10:13:39.515674 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" exitCode=1 Mar 14 10:13:39 crc kubenswrapper[4687]: I0314 10:13:39.515758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a"} Mar 14 10:13:39 crc kubenswrapper[4687]: I0314 10:13:39.516171 4687 scope.go:117] "RemoveContainer" containerID="df588048124ab17fb6f940bbe4b923abb985b85dc55052210f5e014a7f4d92a7" Mar 14 10:13:39 crc kubenswrapper[4687]: I0314 10:13:39.516948 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:13:39 crc kubenswrapper[4687]: E0314 10:13:39.517195 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:42 crc kubenswrapper[4687]: I0314 10:13:42.128311 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:13:42 crc kubenswrapper[4687]: I0314 10:13:42.129159 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:13:42 crc kubenswrapper[4687]: I0314 10:13:42.129651 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:13:42 crc kubenswrapper[4687]: E0314 10:13:42.129875 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:42 crc kubenswrapper[4687]: I0314 10:13:42.551123 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:13:42 crc kubenswrapper[4687]: E0314 10:13:42.551686 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:46 crc kubenswrapper[4687]: I0314 10:13:46.737009 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:13:46 crc kubenswrapper[4687]: I0314 10:13:46.737623 4687 scope.go:117] "RemoveContainer" 
containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:13:46 crc kubenswrapper[4687]: E0314 10:13:46.737791 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:13:46 crc kubenswrapper[4687]: E0314 10:13:46.737886 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:13:54 crc kubenswrapper[4687]: I0314 10:13:54.737150 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:13:54 crc kubenswrapper[4687]: E0314 10:13:54.738004 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:13:58 crc kubenswrapper[4687]: I0314 10:13:58.068324 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:13:58 crc kubenswrapper[4687]: E0314 10:13:58.071859 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.152817 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558054-xbn4t"] Mar 14 10:14:00 crc kubenswrapper[4687]: E0314 10:14:00.153447 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" containerName="oc" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.153460 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" containerName="oc" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.153679 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" containerName="oc" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.154939 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.162443 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.162862 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.165316 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.169164 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-xbn4t"] Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.262459 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csn2\" (UniqueName: \"kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2\") pod \"auto-csr-approver-29558054-xbn4t\" (UID: \"18407777-1d9d-47fd-b34e-567a0066c2c3\") " pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.364755 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7csn2\" (UniqueName: \"kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2\") pod \"auto-csr-approver-29558054-xbn4t\" (UID: \"18407777-1d9d-47fd-b34e-567a0066c2c3\") " pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.388543 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csn2\" (UniqueName: \"kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2\") pod \"auto-csr-approver-29558054-xbn4t\" (UID: \"18407777-1d9d-47fd-b34e-567a0066c2c3\") " 
pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.501249 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:00 crc kubenswrapper[4687]: I0314 10:14:00.738278 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:14:00 crc kubenswrapper[4687]: E0314 10:14:00.738973 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:14:01 crc kubenswrapper[4687]: I0314 10:14:01.000261 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-xbn4t"] Mar 14 10:14:01 crc kubenswrapper[4687]: I0314 10:14:01.449147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" event={"ID":"18407777-1d9d-47fd-b34e-567a0066c2c3","Type":"ContainerStarted","Data":"b805c77c54ced9d7c5fe66d2ec0571ac2e5b30589043cd22bff31c5096b8e0a7"} Mar 14 10:14:02 crc kubenswrapper[4687]: I0314 10:14:02.463793 4687 generic.go:334] "Generic (PLEG): container finished" podID="18407777-1d9d-47fd-b34e-567a0066c2c3" containerID="11933ae8189e74fae4f70c6f71e19b6a5d0b6d05700cd6b2110b285ad8070bdd" exitCode=0 Mar 14 10:14:02 crc kubenswrapper[4687]: I0314 10:14:02.463862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" event={"ID":"18407777-1d9d-47fd-b34e-567a0066c2c3","Type":"ContainerDied","Data":"11933ae8189e74fae4f70c6f71e19b6a5d0b6d05700cd6b2110b285ad8070bdd"} Mar 14 10:14:03 crc kubenswrapper[4687]: I0314 10:14:03.888148 4687 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:03 crc kubenswrapper[4687]: I0314 10:14:03.949511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7csn2\" (UniqueName: \"kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2\") pod \"18407777-1d9d-47fd-b34e-567a0066c2c3\" (UID: \"18407777-1d9d-47fd-b34e-567a0066c2c3\") " Mar 14 10:14:03 crc kubenswrapper[4687]: I0314 10:14:03.960932 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2" (OuterVolumeSpecName: "kube-api-access-7csn2") pod "18407777-1d9d-47fd-b34e-567a0066c2c3" (UID: "18407777-1d9d-47fd-b34e-567a0066c2c3"). InnerVolumeSpecName "kube-api-access-7csn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:14:04 crc kubenswrapper[4687]: I0314 10:14:04.054424 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7csn2\" (UniqueName: \"kubernetes.io/projected/18407777-1d9d-47fd-b34e-567a0066c2c3-kube-api-access-7csn2\") on node \"crc\" DevicePath \"\"" Mar 14 10:14:04 crc kubenswrapper[4687]: I0314 10:14:04.490042 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" event={"ID":"18407777-1d9d-47fd-b34e-567a0066c2c3","Type":"ContainerDied","Data":"b805c77c54ced9d7c5fe66d2ec0571ac2e5b30589043cd22bff31c5096b8e0a7"} Mar 14 10:14:04 crc kubenswrapper[4687]: I0314 10:14:04.490420 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b805c77c54ced9d7c5fe66d2ec0571ac2e5b30589043cd22bff31c5096b8e0a7" Mar 14 10:14:04 crc kubenswrapper[4687]: I0314 10:14:04.490105 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-xbn4t" Mar 14 10:14:05 crc kubenswrapper[4687]: I0314 10:14:05.003164 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-czrm4"] Mar 14 10:14:05 crc kubenswrapper[4687]: I0314 10:14:05.014389 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-czrm4"] Mar 14 10:14:05 crc kubenswrapper[4687]: I0314 10:14:05.747122 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd607b60-570e-4d98-bec8-3deffde8123a" path="/var/lib/kubelet/pods/cd607b60-570e-4d98-bec8-3deffde8123a/volumes" Mar 14 10:14:07 crc kubenswrapper[4687]: I0314 10:14:07.736809 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:14:07 crc kubenswrapper[4687]: E0314 10:14:07.737248 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:14:10 crc kubenswrapper[4687]: I0314 10:14:10.738550 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:14:10 crc kubenswrapper[4687]: E0314 10:14:10.739051 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:14:15 crc kubenswrapper[4687]: I0314 
10:14:15.749400 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:14:15 crc kubenswrapper[4687]: E0314 10:14:15.750315 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:14:19 crc kubenswrapper[4687]: I0314 10:14:19.736468 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:14:19 crc kubenswrapper[4687]: E0314 10:14:19.737092 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:14:25 crc kubenswrapper[4687]: I0314 10:14:25.746676 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:14:25 crc kubenswrapper[4687]: E0314 10:14:25.748200 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:14:30 crc kubenswrapper[4687]: I0314 10:14:30.738012 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:14:30 crc 
kubenswrapper[4687]: E0314 10:14:30.739044 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:14:31 crc kubenswrapper[4687]: I0314 10:14:31.489497 4687 scope.go:117] "RemoveContainer" containerID="a34b0e4edc3f29923b426c859e25dea860f7dd1d11e1932218163ae343d03886" Mar 14 10:14:34 crc kubenswrapper[4687]: I0314 10:14:34.739683 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:14:34 crc kubenswrapper[4687]: E0314 10:14:34.741182 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:14:39 crc kubenswrapper[4687]: I0314 10:14:39.737662 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:14:39 crc kubenswrapper[4687]: E0314 10:14:39.738393 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:14:43 crc kubenswrapper[4687]: I0314 10:14:43.737327 4687 scope.go:117] "RemoveContainer" 
containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:14:43 crc kubenswrapper[4687]: E0314 10:14:43.738330 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:14:48 crc kubenswrapper[4687]: I0314 10:14:48.737948 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:14:48 crc kubenswrapper[4687]: E0314 10:14:48.738807 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:14:51 crc kubenswrapper[4687]: I0314 10:14:51.737258 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:14:51 crc kubenswrapper[4687]: E0314 10:14:51.738120 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:14:56 crc kubenswrapper[4687]: I0314 10:14:56.737617 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:14:56 crc kubenswrapper[4687]: E0314 10:14:56.738386 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.158664 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw"] Mar 14 10:15:00 crc kubenswrapper[4687]: E0314 10:15:00.159604 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18407777-1d9d-47fd-b34e-567a0066c2c3" containerName="oc" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.159620 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="18407777-1d9d-47fd-b34e-567a0066c2c3" containerName="oc" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.159804 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="18407777-1d9d-47fd-b34e-567a0066c2c3" containerName="oc" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.160491 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.165424 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.173593 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.181325 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw"] Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.347602 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.347810 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbtr\" (UniqueName: \"kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.348080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.450316 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.450548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbtr\" (UniqueName: \"kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.450685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.451837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.458538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.482809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbtr\" (UniqueName: \"kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr\") pod \"collect-profiles-29558055-8gfsw\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.512082 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:00 crc kubenswrapper[4687]: I0314 10:15:00.987832 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw"] Mar 14 10:15:01 crc kubenswrapper[4687]: I0314 10:15:01.130015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" event={"ID":"0cac955f-ed86-4cee-a560-9eefec086d56","Type":"ContainerStarted","Data":"b7610fb77f2a9d7ccad7e46476ccf05f6ca3a4b3c421e5e813f5c52a94afbb84"} Mar 14 10:15:02 crc kubenswrapper[4687]: I0314 10:15:02.145872 4687 generic.go:334] "Generic (PLEG): container finished" podID="0cac955f-ed86-4cee-a560-9eefec086d56" containerID="77bf75911634e7286e4a0142bb67dbea80ef018b39aeda542c0c3b938cab826d" exitCode=0 Mar 14 10:15:02 crc kubenswrapper[4687]: I0314 10:15:02.146103 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" 
event={"ID":"0cac955f-ed86-4cee-a560-9eefec086d56","Type":"ContainerDied","Data":"77bf75911634e7286e4a0142bb67dbea80ef018b39aeda542c0c3b938cab826d"} Mar 14 10:15:02 crc kubenswrapper[4687]: I0314 10:15:02.737237 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:15:02 crc kubenswrapper[4687]: I0314 10:15:02.737865 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:15:02 crc kubenswrapper[4687]: E0314 10:15:02.738438 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.160981 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3"} Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.516499 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.644500 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tbtr\" (UniqueName: \"kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr\") pod \"0cac955f-ed86-4cee-a560-9eefec086d56\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.644668 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume\") pod \"0cac955f-ed86-4cee-a560-9eefec086d56\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.644891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume\") pod \"0cac955f-ed86-4cee-a560-9eefec086d56\" (UID: \"0cac955f-ed86-4cee-a560-9eefec086d56\") " Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.646061 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume" (OuterVolumeSpecName: "config-volume") pod "0cac955f-ed86-4cee-a560-9eefec086d56" (UID: "0cac955f-ed86-4cee-a560-9eefec086d56"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.660764 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0cac955f-ed86-4cee-a560-9eefec086d56" (UID: "0cac955f-ed86-4cee-a560-9eefec086d56"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.660974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr" (OuterVolumeSpecName: "kube-api-access-4tbtr") pod "0cac955f-ed86-4cee-a560-9eefec086d56" (UID: "0cac955f-ed86-4cee-a560-9eefec086d56"). InnerVolumeSpecName "kube-api-access-4tbtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.748473 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cac955f-ed86-4cee-a560-9eefec086d56-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.748553 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tbtr\" (UniqueName: \"kubernetes.io/projected/0cac955f-ed86-4cee-a560-9eefec086d56-kube-api-access-4tbtr\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:03 crc kubenswrapper[4687]: I0314 10:15:03.748655 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cac955f-ed86-4cee-a560-9eefec086d56-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:04 crc kubenswrapper[4687]: I0314 10:15:04.175210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" event={"ID":"0cac955f-ed86-4cee-a560-9eefec086d56","Type":"ContainerDied","Data":"b7610fb77f2a9d7ccad7e46476ccf05f6ca3a4b3c421e5e813f5c52a94afbb84"} Mar 14 10:15:04 crc kubenswrapper[4687]: I0314 10:15:04.176438 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7610fb77f2a9d7ccad7e46476ccf05f6ca3a4b3c421e5e813f5c52a94afbb84" Mar 14 10:15:04 crc kubenswrapper[4687]: I0314 10:15:04.176658 4687 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-8gfsw" Mar 14 10:15:04 crc kubenswrapper[4687]: I0314 10:15:04.635864 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7"] Mar 14 10:15:04 crc kubenswrapper[4687]: I0314 10:15:04.646711 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-xkqf7"] Mar 14 10:15:05 crc kubenswrapper[4687]: I0314 10:15:05.761002 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857aaacc-2144-4712-aa3d-d0e5198b96ca" path="/var/lib/kubelet/pods/857aaacc-2144-4712-aa3d-d0e5198b96ca/volumes" Mar 14 10:15:07 crc kubenswrapper[4687]: I0314 10:15:07.738151 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:15:07 crc kubenswrapper[4687]: E0314 10:15:07.739691 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:17 crc kubenswrapper[4687]: I0314 10:15:17.737469 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:15:17 crc kubenswrapper[4687]: E0314 10:15:17.738159 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:15:20 crc kubenswrapper[4687]: I0314 10:15:20.737555 4687 
scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:15:20 crc kubenswrapper[4687]: E0314 10:15:20.738038 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:31 crc kubenswrapper[4687]: I0314 10:15:31.579421 4687 scope.go:117] "RemoveContainer" containerID="dde34f00655c7c50a9c978cb3ee6ca5a1af90a99d8b129f5739bc565ebb07f41" Mar 14 10:15:31 crc kubenswrapper[4687]: I0314 10:15:31.737885 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:15:31 crc kubenswrapper[4687]: E0314 10:15:31.738159 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.469299 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:32 crc kubenswrapper[4687]: E0314 10:15:32.469736 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cac955f-ed86-4cee-a560-9eefec086d56" containerName="collect-profiles" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.469754 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cac955f-ed86-4cee-a560-9eefec086d56" containerName="collect-profiles" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.469940 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0cac955f-ed86-4cee-a560-9eefec086d56" containerName="collect-profiles" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.471316 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.487477 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.549906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqflv\" (UniqueName: \"kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.550125 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.550235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.652692 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqflv\" (UniqueName: \"kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv\") pod \"certified-operators-wdb9j\" 
(UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.652836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.653295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.652872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.653384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.672167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqflv\" (UniqueName: \"kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv\") pod \"certified-operators-wdb9j\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " 
pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.740666 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:15:32 crc kubenswrapper[4687]: E0314 10:15:32.741224 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:15:32 crc kubenswrapper[4687]: I0314 10:15:32.829179 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:33 crc kubenswrapper[4687]: W0314 10:15:33.403289 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a798bd_9ef5_4d67_b632_f36c959d349d.slice/crio-00014315224fdd1eb8f88ac5abfbb62765ec7f26b1445b35abad599e65e621cd WatchSource:0}: Error finding container 00014315224fdd1eb8f88ac5abfbb62765ec7f26b1445b35abad599e65e621cd: Status 404 returned error can't find the container with id 00014315224fdd1eb8f88ac5abfbb62765ec7f26b1445b35abad599e65e621cd Mar 14 10:15:33 crc kubenswrapper[4687]: I0314 10:15:33.406606 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:33 crc kubenswrapper[4687]: I0314 10:15:33.545099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerStarted","Data":"00014315224fdd1eb8f88ac5abfbb62765ec7f26b1445b35abad599e65e621cd"} Mar 14 10:15:34 crc kubenswrapper[4687]: I0314 10:15:34.559747 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerID="ff0e9ab30dd473d00ea530c21306bc2d96ba0b328d55e90491c14bc19983f310" exitCode=0 Mar 14 10:15:34 crc kubenswrapper[4687]: I0314 10:15:34.559901 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerDied","Data":"ff0e9ab30dd473d00ea530c21306bc2d96ba0b328d55e90491c14bc19983f310"} Mar 14 10:15:34 crc kubenswrapper[4687]: I0314 10:15:34.563259 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:15:35 crc kubenswrapper[4687]: I0314 10:15:35.574264 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerStarted","Data":"81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb"} Mar 14 10:15:36 crc kubenswrapper[4687]: I0314 10:15:36.587097 4687 generic.go:334] "Generic (PLEG): container finished" podID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerID="81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb" exitCode=0 Mar 14 10:15:36 crc kubenswrapper[4687]: I0314 10:15:36.587158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerDied","Data":"81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb"} Mar 14 10:15:36 crc kubenswrapper[4687]: E0314 10:15:36.659880 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a798bd_9ef5_4d67_b632_f36c959d349d.slice/crio-81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a798bd_9ef5_4d67_b632_f36c959d349d.slice/crio-conmon-81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb.scope\": RecentStats: unable to find data in memory cache]" Mar 14 10:15:37 crc kubenswrapper[4687]: I0314 10:15:37.599619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerStarted","Data":"e593873ac67103243bac216464b737686825fd59df91455615756267bc5c4d3f"} Mar 14 10:15:37 crc kubenswrapper[4687]: I0314 10:15:37.626094 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdb9j" podStartSLOduration=3.213054389 podStartE2EDuration="5.626079676s" podCreationTimestamp="2026-03-14 10:15:32 +0000 UTC" firstStartedPulling="2026-03-14 10:15:34.562841118 +0000 UTC m=+4719.551081533" lastFinishedPulling="2026-03-14 10:15:36.975866405 +0000 UTC m=+4721.964106820" observedRunningTime="2026-03-14 10:15:37.623919743 +0000 UTC m=+4722.612160128" watchObservedRunningTime="2026-03-14 10:15:37.626079676 +0000 UTC m=+4722.614320051" Mar 14 10:15:42 crc kubenswrapper[4687]: I0314 10:15:42.829981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:42 crc kubenswrapper[4687]: I0314 10:15:42.830684 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:42 crc kubenswrapper[4687]: I0314 10:15:42.895318 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:43 crc kubenswrapper[4687]: I0314 10:15:43.735462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:43 crc 
kubenswrapper[4687]: I0314 10:15:43.807209 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:44 crc kubenswrapper[4687]: I0314 10:15:44.737546 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:15:44 crc kubenswrapper[4687]: E0314 10:15:44.738087 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:44 crc kubenswrapper[4687]: I0314 10:15:44.738662 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:15:44 crc kubenswrapper[4687]: E0314 10:15:44.738842 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:15:45 crc kubenswrapper[4687]: I0314 10:15:45.690675 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wdb9j" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="registry-server" containerID="cri-o://e593873ac67103243bac216464b737686825fd59df91455615756267bc5c4d3f" gracePeriod=2 Mar 14 10:15:46 crc kubenswrapper[4687]: I0314 10:15:46.708423 4687 generic.go:334] "Generic (PLEG): container finished" podID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerID="e593873ac67103243bac216464b737686825fd59df91455615756267bc5c4d3f" exitCode=0 Mar 14 10:15:46 crc kubenswrapper[4687]: 
I0314 10:15:46.708549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerDied","Data":"e593873ac67103243bac216464b737686825fd59df91455615756267bc5c4d3f"} Mar 14 10:15:46 crc kubenswrapper[4687]: I0314 10:15:46.990881 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.006808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities\") pod \"62a798bd-9ef5-4d67-b632-f36c959d349d\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.007866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities" (OuterVolumeSpecName: "utilities") pod "62a798bd-9ef5-4d67-b632-f36c959d349d" (UID: "62a798bd-9ef5-4d67-b632-f36c959d349d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.008447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content\") pod \"62a798bd-9ef5-4d67-b632-f36c959d349d\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.008579 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqflv\" (UniqueName: \"kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv\") pod \"62a798bd-9ef5-4d67-b632-f36c959d349d\" (UID: \"62a798bd-9ef5-4d67-b632-f36c959d349d\") " Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.009772 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.018470 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv" (OuterVolumeSpecName: "kube-api-access-hqflv") pod "62a798bd-9ef5-4d67-b632-f36c959d349d" (UID: "62a798bd-9ef5-4d67-b632-f36c959d349d"). InnerVolumeSpecName "kube-api-access-hqflv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.062549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62a798bd-9ef5-4d67-b632-f36c959d349d" (UID: "62a798bd-9ef5-4d67-b632-f36c959d349d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.111422 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a798bd-9ef5-4d67-b632-f36c959d349d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.111459 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqflv\" (UniqueName: \"kubernetes.io/projected/62a798bd-9ef5-4d67-b632-f36c959d349d-kube-api-access-hqflv\") on node \"crc\" DevicePath \"\"" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.723606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdb9j" event={"ID":"62a798bd-9ef5-4d67-b632-f36c959d349d","Type":"ContainerDied","Data":"00014315224fdd1eb8f88ac5abfbb62765ec7f26b1445b35abad599e65e621cd"} Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.724073 4687 scope.go:117] "RemoveContainer" containerID="e593873ac67103243bac216464b737686825fd59df91455615756267bc5c4d3f" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.723729 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdb9j" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.749822 4687 scope.go:117] "RemoveContainer" containerID="81074ef32cf16a7d31359a9afc1db480fe2e31209a9d6a4f0355342fcdfe81cb" Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.768071 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.777440 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wdb9j"] Mar 14 10:15:47 crc kubenswrapper[4687]: I0314 10:15:47.793757 4687 scope.go:117] "RemoveContainer" containerID="ff0e9ab30dd473d00ea530c21306bc2d96ba0b328d55e90491c14bc19983f310" Mar 14 10:15:49 crc kubenswrapper[4687]: I0314 10:15:49.747467 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" path="/var/lib/kubelet/pods/62a798bd-9ef5-4d67-b632-f36c959d349d/volumes" Mar 14 10:15:55 crc kubenswrapper[4687]: I0314 10:15:55.746289 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:15:55 crc kubenswrapper[4687]: E0314 10:15:55.747226 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:15:57 crc kubenswrapper[4687]: I0314 10:15:57.737967 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:15:57 crc kubenswrapper[4687]: E0314 10:15:57.738828 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.144430 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558056-bmzfv"] Mar 14 10:16:00 crc kubenswrapper[4687]: E0314 10:16:00.144981 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="extract-content" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.144992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="extract-content" Mar 14 10:16:00 crc kubenswrapper[4687]: E0314 10:16:00.145020 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="registry-server" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.145026 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="registry-server" Mar 14 10:16:00 crc kubenswrapper[4687]: E0314 10:16:00.145042 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="extract-utilities" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.145049 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="extract-utilities" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.145255 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a798bd-9ef5-4d67-b632-f36c959d349d" containerName="registry-server" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.145851 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.148857 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.149794 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.150242 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.161291 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-bmzfv"] Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.192530 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgdsm\" (UniqueName: \"kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm\") pod \"auto-csr-approver-29558056-bmzfv\" (UID: \"90232b1d-0f84-4fe9-882f-382f3ceb76a8\") " pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.295018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgdsm\" (UniqueName: \"kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm\") pod \"auto-csr-approver-29558056-bmzfv\" (UID: \"90232b1d-0f84-4fe9-882f-382f3ceb76a8\") " pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.316389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgdsm\" (UniqueName: \"kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm\") pod \"auto-csr-approver-29558056-bmzfv\" (UID: \"90232b1d-0f84-4fe9-882f-382f3ceb76a8\") " 
pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.467924 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:00 crc kubenswrapper[4687]: I0314 10:16:00.914730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-bmzfv"] Mar 14 10:16:01 crc kubenswrapper[4687]: I0314 10:16:01.876556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" event={"ID":"90232b1d-0f84-4fe9-882f-382f3ceb76a8","Type":"ContainerStarted","Data":"562adc07770f67a29c85be9a9e77fe8c87501193852185268f7217f8b816a101"} Mar 14 10:16:02 crc kubenswrapper[4687]: I0314 10:16:02.886857 4687 generic.go:334] "Generic (PLEG): container finished" podID="90232b1d-0f84-4fe9-882f-382f3ceb76a8" containerID="c5ce01a8e2ab3ad5764f935d360abac31beda6498db5c3553a618084f9b40488" exitCode=0 Mar 14 10:16:02 crc kubenswrapper[4687]: I0314 10:16:02.886900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" event={"ID":"90232b1d-0f84-4fe9-882f-382f3ceb76a8","Type":"ContainerDied","Data":"c5ce01a8e2ab3ad5764f935d360abac31beda6498db5c3553a618084f9b40488"} Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.346243 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.377452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgdsm\" (UniqueName: \"kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm\") pod \"90232b1d-0f84-4fe9-882f-382f3ceb76a8\" (UID: \"90232b1d-0f84-4fe9-882f-382f3ceb76a8\") " Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.386644 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm" (OuterVolumeSpecName: "kube-api-access-vgdsm") pod "90232b1d-0f84-4fe9-882f-382f3ceb76a8" (UID: "90232b1d-0f84-4fe9-882f-382f3ceb76a8"). InnerVolumeSpecName "kube-api-access-vgdsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.389732 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgdsm\" (UniqueName: \"kubernetes.io/projected/90232b1d-0f84-4fe9-882f-382f3ceb76a8-kube-api-access-vgdsm\") on node \"crc\" DevicePath \"\"" Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.910898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" event={"ID":"90232b1d-0f84-4fe9-882f-382f3ceb76a8","Type":"ContainerDied","Data":"562adc07770f67a29c85be9a9e77fe8c87501193852185268f7217f8b816a101"} Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.911243 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562adc07770f67a29c85be9a9e77fe8c87501193852185268f7217f8b816a101" Mar 14 10:16:04 crc kubenswrapper[4687]: I0314 10:16:04.910974 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-bmzfv" Mar 14 10:16:05 crc kubenswrapper[4687]: I0314 10:16:05.418413 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-knl9t"] Mar 14 10:16:05 crc kubenswrapper[4687]: I0314 10:16:05.445301 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-knl9t"] Mar 14 10:16:05 crc kubenswrapper[4687]: I0314 10:16:05.749882 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa" path="/var/lib/kubelet/pods/027d7c8b-4a19-4deb-8b3e-cd3d9ebd3bfa/volumes" Mar 14 10:16:08 crc kubenswrapper[4687]: I0314 10:16:08.737007 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:16:08 crc kubenswrapper[4687]: I0314 10:16:08.737775 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:16:08 crc kubenswrapper[4687]: E0314 10:16:08.737910 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:16:08 crc kubenswrapper[4687]: E0314 10:16:08.738320 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:16:22 crc kubenswrapper[4687]: I0314 10:16:22.737624 4687 scope.go:117] "RemoveContainer" 
containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:16:22 crc kubenswrapper[4687]: E0314 10:16:22.738414 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:16:23 crc kubenswrapper[4687]: I0314 10:16:23.737575 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:16:23 crc kubenswrapper[4687]: E0314 10:16:23.738266 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:16:31 crc kubenswrapper[4687]: I0314 10:16:31.677944 4687 scope.go:117] "RemoveContainer" containerID="7a357a8668da03e208fb538311f4412fc5f24caed2253ccb955b3ad919dcbe97" Mar 14 10:16:35 crc kubenswrapper[4687]: I0314 10:16:35.751278 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:16:35 crc kubenswrapper[4687]: E0314 10:16:35.752692 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:16:37 crc kubenswrapper[4687]: I0314 10:16:37.736856 4687 scope.go:117] "RemoveContainer" 
containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:16:37 crc kubenswrapper[4687]: E0314 10:16:37.737691 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:16:41 crc kubenswrapper[4687]: I0314 10:16:41.762105 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ce12a318-4d43-442e-9621-690da5f189eb" containerName="galera" probeResult="failure" output="command timed out" Mar 14 10:16:41 crc kubenswrapper[4687]: I0314 10:16:41.763309 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ce12a318-4d43-442e-9621-690da5f189eb" containerName="galera" probeResult="failure" output="command timed out" Mar 14 10:16:49 crc kubenswrapper[4687]: I0314 10:16:49.739209 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:16:49 crc kubenswrapper[4687]: I0314 10:16:49.739759 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:16:49 crc kubenswrapper[4687]: E0314 10:16:49.739893 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:16:49 crc kubenswrapper[4687]: E0314 10:16:49.739989 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:17:02 crc kubenswrapper[4687]: I0314 10:17:02.737415 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:17:02 crc kubenswrapper[4687]: E0314 10:17:02.738418 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:17:04 crc kubenswrapper[4687]: I0314 10:17:04.737733 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:17:04 crc kubenswrapper[4687]: E0314 10:17:04.738417 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:17:15 crc kubenswrapper[4687]: I0314 10:17:15.748407 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:17:15 crc kubenswrapper[4687]: E0314 10:17:15.749317 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" 
Mar 14 10:17:17 crc kubenswrapper[4687]: I0314 10:17:17.737947 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:17:17 crc kubenswrapper[4687]: E0314 10:17:17.738714 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:17:24 crc kubenswrapper[4687]: I0314 10:17:24.111471 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:17:24 crc kubenswrapper[4687]: I0314 10:17:24.111966 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.090409 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sfb6d/must-gather-lkqfv"] Mar 14 10:17:28 crc kubenswrapper[4687]: E0314 10:17:28.102956 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90232b1d-0f84-4fe9-882f-382f3ceb76a8" containerName="oc" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.102974 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="90232b1d-0f84-4fe9-882f-382f3ceb76a8" containerName="oc" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.103489 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90232b1d-0f84-4fe9-882f-382f3ceb76a8" containerName="oc" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.104561 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sfb6d/must-gather-lkqfv"] Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.104641 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.106646 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sfb6d"/"default-dockercfg-k8k55" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.107071 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sfb6d"/"openshift-service-ca.crt" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.107216 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sfb6d"/"kube-root-ca.crt" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.302839 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbwkp\" (UniqueName: \"kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.303297 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.405997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.406499 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbwkp\" (UniqueName: \"kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.406591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.424393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbwkp\" (UniqueName: \"kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp\") pod \"must-gather-lkqfv\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:28 crc kubenswrapper[4687]: I0314 10:17:28.725037 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:17:29 crc kubenswrapper[4687]: I0314 10:17:29.246742 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sfb6d/must-gather-lkqfv"] Mar 14 10:17:29 crc kubenswrapper[4687]: I0314 10:17:29.324921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" event={"ID":"44d0bd02-b076-408e-987c-1394b6fb6f0d","Type":"ContainerStarted","Data":"6b6b85d5651de58aef7c1c31b5aec1d78ad717a1a0dc53df1ecc5adc960ad2d9"} Mar 14 10:17:30 crc kubenswrapper[4687]: I0314 10:17:30.737491 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:17:30 crc kubenswrapper[4687]: E0314 10:17:30.737901 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:17:31 crc kubenswrapper[4687]: I0314 10:17:31.737766 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:17:31 crc kubenswrapper[4687]: E0314 10:17:31.741849 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:17:39 crc kubenswrapper[4687]: I0314 10:17:39.443851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" 
event={"ID":"44d0bd02-b076-408e-987c-1394b6fb6f0d","Type":"ContainerStarted","Data":"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc"} Mar 14 10:17:40 crc kubenswrapper[4687]: I0314 10:17:40.456292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" event={"ID":"44d0bd02-b076-408e-987c-1394b6fb6f0d","Type":"ContainerStarted","Data":"5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24"} Mar 14 10:17:40 crc kubenswrapper[4687]: I0314 10:17:40.473842 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" podStartSLOduration=2.7732676720000002 podStartE2EDuration="12.473823859s" podCreationTimestamp="2026-03-14 10:17:28 +0000 UTC" firstStartedPulling="2026-03-14 10:17:29.255414392 +0000 UTC m=+4834.243654767" lastFinishedPulling="2026-03-14 10:17:38.955970539 +0000 UTC m=+4843.944210954" observedRunningTime="2026-03-14 10:17:40.471164023 +0000 UTC m=+4845.459404428" watchObservedRunningTime="2026-03-14 10:17:40.473823859 +0000 UTC m=+4845.462064234" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.421687 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-kpdhg"] Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.423572 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.545791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn99s\" (UniqueName: \"kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.545836 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.648293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn99s\" (UniqueName: \"kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.648424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.648939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc 
kubenswrapper[4687]: I0314 10:17:45.689985 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn99s\" (UniqueName: \"kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s\") pod \"crc-debug-kpdhg\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.740221 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:17:45 crc kubenswrapper[4687]: I0314 10:17:45.749882 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:17:45 crc kubenswrapper[4687]: E0314 10:17:45.750109 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:17:45 crc kubenswrapper[4687]: W0314 10:17:45.782515 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod457ccc5b_b8e7_4b36_9941_a9b12eacb336.slice/crio-727cbae6b3513e355a048572411617e7670564a993ba79ebfb21194de354dbb6 WatchSource:0}: Error finding container 727cbae6b3513e355a048572411617e7670564a993ba79ebfb21194de354dbb6: Status 404 returned error can't find the container with id 727cbae6b3513e355a048572411617e7670564a993ba79ebfb21194de354dbb6 Mar 14 10:17:46 crc kubenswrapper[4687]: I0314 10:17:46.519590 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" 
event={"ID":"457ccc5b-b8e7-4b36-9941-a9b12eacb336","Type":"ContainerStarted","Data":"727cbae6b3513e355a048572411617e7670564a993ba79ebfb21194de354dbb6"} Mar 14 10:17:46 crc kubenswrapper[4687]: I0314 10:17:46.738268 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:17:46 crc kubenswrapper[4687]: E0314 10:17:46.738521 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:17:54 crc kubenswrapper[4687]: I0314 10:17:54.111944 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:17:54 crc kubenswrapper[4687]: I0314 10:17:54.112765 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:17:57 crc kubenswrapper[4687]: I0314 10:17:57.616881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" event={"ID":"457ccc5b-b8e7-4b36-9941-a9b12eacb336","Type":"ContainerStarted","Data":"c2cb3d912c7760e01c8d19fac36b5c4783b22fced433445966fe829beb26f228"} Mar 14 10:17:57 crc kubenswrapper[4687]: I0314 10:17:57.631521 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" 
podStartSLOduration=1.980173065 podStartE2EDuration="12.631465596s" podCreationTimestamp="2026-03-14 10:17:45 +0000 UTC" firstStartedPulling="2026-03-14 10:17:45.786946929 +0000 UTC m=+4850.775187304" lastFinishedPulling="2026-03-14 10:17:56.43823946 +0000 UTC m=+4861.426479835" observedRunningTime="2026-03-14 10:17:57.627704514 +0000 UTC m=+4862.615944879" watchObservedRunningTime="2026-03-14 10:17:57.631465596 +0000 UTC m=+4862.619705991" Mar 14 10:17:57 crc kubenswrapper[4687]: I0314 10:17:57.737648 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:17:57 crc kubenswrapper[4687]: E0314 10:17:57.737854 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:17:58 crc kubenswrapper[4687]: I0314 10:17:58.736961 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:17:58 crc kubenswrapper[4687]: E0314 10:17:58.737830 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.145644 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558058-skn62"] Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.147646 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.149910 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.150327 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.154105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77sr\" (UniqueName: \"kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr\") pod \"auto-csr-approver-29558058-skn62\" (UID: \"fde34560-2213-45da-bebf-2ef1b97169f7\") " pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.155595 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.156674 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-skn62"] Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.255803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77sr\" (UniqueName: \"kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr\") pod \"auto-csr-approver-29558058-skn62\" (UID: \"fde34560-2213-45da-bebf-2ef1b97169f7\") " pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.274313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77sr\" (UniqueName: \"kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr\") pod \"auto-csr-approver-29558058-skn62\" (UID: \"fde34560-2213-45da-bebf-2ef1b97169f7\") " 
pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.467132 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:00 crc kubenswrapper[4687]: I0314 10:18:00.989628 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-skn62"] Mar 14 10:18:01 crc kubenswrapper[4687]: W0314 10:18:01.004597 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde34560_2213_45da_bebf_2ef1b97169f7.slice/crio-2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca WatchSource:0}: Error finding container 2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca: Status 404 returned error can't find the container with id 2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca Mar 14 10:18:01 crc kubenswrapper[4687]: I0314 10:18:01.679074 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-skn62" event={"ID":"fde34560-2213-45da-bebf-2ef1b97169f7","Type":"ContainerStarted","Data":"2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca"} Mar 14 10:18:04 crc kubenswrapper[4687]: I0314 10:18:04.705013 4687 generic.go:334] "Generic (PLEG): container finished" podID="fde34560-2213-45da-bebf-2ef1b97169f7" containerID="99149720dafaa0e437d905f4b99d5d59ab8c8a449a0f19ec762adafc0308c22a" exitCode=0 Mar 14 10:18:04 crc kubenswrapper[4687]: I0314 10:18:04.705064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-skn62" event={"ID":"fde34560-2213-45da-bebf-2ef1b97169f7","Type":"ContainerDied","Data":"99149720dafaa0e437d905f4b99d5d59ab8c8a449a0f19ec762adafc0308c22a"} Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.094958 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.277479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z77sr\" (UniqueName: \"kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr\") pod \"fde34560-2213-45da-bebf-2ef1b97169f7\" (UID: \"fde34560-2213-45da-bebf-2ef1b97169f7\") " Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.291166 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr" (OuterVolumeSpecName: "kube-api-access-z77sr") pod "fde34560-2213-45da-bebf-2ef1b97169f7" (UID: "fde34560-2213-45da-bebf-2ef1b97169f7"). InnerVolumeSpecName "kube-api-access-z77sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.381147 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z77sr\" (UniqueName: \"kubernetes.io/projected/fde34560-2213-45da-bebf-2ef1b97169f7-kube-api-access-z77sr\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.726030 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-skn62" event={"ID":"fde34560-2213-45da-bebf-2ef1b97169f7","Type":"ContainerDied","Data":"2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca"} Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.726408 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa443e1a69956f6e98c658967dd052ca578982a2355e117baa268b6ddf160ca" Mar 14 10:18:06 crc kubenswrapper[4687]: I0314 10:18:06.726121 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-skn62" Mar 14 10:18:07 crc kubenswrapper[4687]: I0314 10:18:07.177265 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-mm5bk"] Mar 14 10:18:07 crc kubenswrapper[4687]: I0314 10:18:07.189976 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-mm5bk"] Mar 14 10:18:07 crc kubenswrapper[4687]: I0314 10:18:07.748513 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be964cc5-c7af-4035-9d4b-c2d6aaecbcc8" path="/var/lib/kubelet/pods/be964cc5-c7af-4035-9d4b-c2d6aaecbcc8/volumes" Mar 14 10:18:09 crc kubenswrapper[4687]: I0314 10:18:09.737388 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:18:09 crc kubenswrapper[4687]: E0314 10:18:09.737803 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:18:11 crc kubenswrapper[4687]: I0314 10:18:11.737378 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:18:11 crc kubenswrapper[4687]: E0314 10:18:11.737900 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:18:12 crc kubenswrapper[4687]: I0314 10:18:12.782003 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="457ccc5b-b8e7-4b36-9941-a9b12eacb336" containerID="c2cb3d912c7760e01c8d19fac36b5c4783b22fced433445966fe829beb26f228" exitCode=0 Mar 14 10:18:12 crc kubenswrapper[4687]: I0314 10:18:12.782046 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" event={"ID":"457ccc5b-b8e7-4b36-9941-a9b12eacb336","Type":"ContainerDied","Data":"c2cb3d912c7760e01c8d19fac36b5c4783b22fced433445966fe829beb26f228"} Mar 14 10:18:13 crc kubenswrapper[4687]: I0314 10:18:13.891837 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:18:13 crc kubenswrapper[4687]: I0314 10:18:13.925906 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-kpdhg"] Mar 14 10:18:13 crc kubenswrapper[4687]: I0314 10:18:13.934848 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-kpdhg"] Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.034182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host\") pod \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.034699 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn99s\" (UniqueName: \"kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s\") pod \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\" (UID: \"457ccc5b-b8e7-4b36-9941-a9b12eacb336\") " Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.034870 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host" (OuterVolumeSpecName: "host") pod "457ccc5b-b8e7-4b36-9941-a9b12eacb336" (UID: 
"457ccc5b-b8e7-4b36-9941-a9b12eacb336"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.035294 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457ccc5b-b8e7-4b36-9941-a9b12eacb336-host\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.040859 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s" (OuterVolumeSpecName: "kube-api-access-rn99s") pod "457ccc5b-b8e7-4b36-9941-a9b12eacb336" (UID: "457ccc5b-b8e7-4b36-9941-a9b12eacb336"). InnerVolumeSpecName "kube-api-access-rn99s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.137174 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn99s\" (UniqueName: \"kubernetes.io/projected/457ccc5b-b8e7-4b36-9941-a9b12eacb336-kube-api-access-rn99s\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.799931 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="727cbae6b3513e355a048572411617e7670564a993ba79ebfb21194de354dbb6" Mar 14 10:18:14 crc kubenswrapper[4687]: I0314 10:18:14.799982 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-kpdhg" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.074939 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-9qmj7"] Mar 14 10:18:15 crc kubenswrapper[4687]: E0314 10:18:15.075470 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde34560-2213-45da-bebf-2ef1b97169f7" containerName="oc" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.075487 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde34560-2213-45da-bebf-2ef1b97169f7" containerName="oc" Mar 14 10:18:15 crc kubenswrapper[4687]: E0314 10:18:15.075530 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457ccc5b-b8e7-4b36-9941-a9b12eacb336" containerName="container-00" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.075540 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="457ccc5b-b8e7-4b36-9941-a9b12eacb336" containerName="container-00" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.075754 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="457ccc5b-b8e7-4b36-9941-a9b12eacb336" containerName="container-00" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.075767 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde34560-2213-45da-bebf-2ef1b97169f7" containerName="oc" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.076461 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.157437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72pb\" (UniqueName: \"kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.157534 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.259050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.259236 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.259248 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72pb\" (UniqueName: \"kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc 
kubenswrapper[4687]: I0314 10:18:15.286619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72pb\" (UniqueName: \"kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb\") pod \"crc-debug-9qmj7\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.392664 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:15 crc kubenswrapper[4687]: W0314 10:18:15.433562 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecae2ad_f1cf_4d57_b6aa_17f9094c6b10.slice/crio-98779f4e0aa473d3327a378ae2f6aada8ef01e3ffc122369db6de584d65a084c WatchSource:0}: Error finding container 98779f4e0aa473d3327a378ae2f6aada8ef01e3ffc122369db6de584d65a084c: Status 404 returned error can't find the container with id 98779f4e0aa473d3327a378ae2f6aada8ef01e3ffc122369db6de584d65a084c Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.747570 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457ccc5b-b8e7-4b36-9941-a9b12eacb336" path="/var/lib/kubelet/pods/457ccc5b-b8e7-4b36-9941-a9b12eacb336/volumes" Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.810204 4687 generic.go:334] "Generic (PLEG): container finished" podID="9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" containerID="ec6d391214c81e31c00f5a4268811889bfa69bdd3fc2663742152ddf670b7625" exitCode=1 Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.810286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" event={"ID":"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10","Type":"ContainerDied","Data":"ec6d391214c81e31c00f5a4268811889bfa69bdd3fc2663742152ddf670b7625"} Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.810516 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" event={"ID":"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10","Type":"ContainerStarted","Data":"98779f4e0aa473d3327a378ae2f6aada8ef01e3ffc122369db6de584d65a084c"} Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.856527 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-9qmj7"] Mar 14 10:18:15 crc kubenswrapper[4687]: I0314 10:18:15.866954 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sfb6d/crc-debug-9qmj7"] Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.093053 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.194718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host\") pod \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.194822 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r72pb\" (UniqueName: \"kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb\") pod \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\" (UID: \"9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10\") " Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.194842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host" (OuterVolumeSpecName: "host") pod "9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" (UID: "9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.195452 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-host\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.200562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb" (OuterVolumeSpecName: "kube-api-access-r72pb") pod "9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" (UID: "9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10"). InnerVolumeSpecName "kube-api-access-r72pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.297291 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r72pb\" (UniqueName: \"kubernetes.io/projected/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10-kube-api-access-r72pb\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.754813 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" path="/var/lib/kubelet/pods/9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10/volumes" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.832185 4687 scope.go:117] "RemoveContainer" containerID="ec6d391214c81e31c00f5a4268811889bfa69bdd3fc2663742152ddf670b7625" Mar 14 10:18:17 crc kubenswrapper[4687]: I0314 10:18:17.832230 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/crc-debug-9qmj7" Mar 14 10:18:20 crc kubenswrapper[4687]: I0314 10:18:20.737261 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:18:20 crc kubenswrapper[4687]: E0314 10:18:20.737703 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:18:22 crc kubenswrapper[4687]: I0314 10:18:22.737431 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:18:22 crc kubenswrapper[4687]: E0314 10:18:22.738111 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.111038 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.111584 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.111643 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.112791 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.112881 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3" gracePeriod=600 Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.909109 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3" exitCode=0 Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.909286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3"} Mar 14 10:18:24 crc kubenswrapper[4687]: I0314 10:18:24.909639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec"} Mar 14 10:18:24 crc 
kubenswrapper[4687]: I0314 10:18:24.909669 4687 scope.go:117] "RemoveContainer" containerID="86f8fa1f5919185361caca4295bc9016d5050d031fb5086e907ba5a8cd185b3c" Mar 14 10:18:31 crc kubenswrapper[4687]: I0314 10:18:31.817606 4687 scope.go:117] "RemoveContainer" containerID="0ea0a231dcd227d252e3c2fa79aff81b20409a8902b1d63d70a416e74a355acc" Mar 14 10:18:32 crc kubenswrapper[4687]: I0314 10:18:32.737986 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:18:33 crc kubenswrapper[4687]: I0314 10:18:33.001266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888"} Mar 14 10:18:37 crc kubenswrapper[4687]: I0314 10:18:37.737397 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:18:37 crc kubenswrapper[4687]: E0314 10:18:37.738396 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.089909 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" exitCode=1 Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.089971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888"} Mar 14 10:18:42 crc 
kubenswrapper[4687]: I0314 10:18:42.090596 4687 scope.go:117] "RemoveContainer" containerID="8f9a187be9394ab7ff2e8cb23a1e1dc3492dd851a0f41612e748019f6960d9a7" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.091370 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:18:42 crc kubenswrapper[4687]: E0314 10:18:42.091662 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.219658 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.219749 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.219770 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:18:42 crc kubenswrapper[4687]: I0314 10:18:42.219786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:18:43 crc kubenswrapper[4687]: I0314 10:18:43.104118 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:18:43 crc kubenswrapper[4687]: E0314 10:18:43.104845 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:18:52 crc kubenswrapper[4687]: I0314 10:18:52.737764 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:18:53 crc kubenswrapper[4687]: I0314 10:18:53.195873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23"} Mar 14 10:18:53 crc kubenswrapper[4687]: I0314 10:18:53.737257 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:18:53 crc kubenswrapper[4687]: E0314 10:18:53.737786 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:01 crc kubenswrapper[4687]: I0314 10:19:01.263207 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" exitCode=1 Mar 14 10:19:01 crc kubenswrapper[4687]: I0314 10:19:01.263303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23"} Mar 14 10:19:01 crc kubenswrapper[4687]: I0314 10:19:01.263851 4687 scope.go:117] "RemoveContainer" containerID="c9f895739e0f1e0a3950cfd4b9f735f3aa03ce4c08a02c099a993c5fd8f50f5a" Mar 14 10:19:01 crc kubenswrapper[4687]: I0314 10:19:01.267158 4687 scope.go:117] "RemoveContainer" 
containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:01 crc kubenswrapper[4687]: E0314 10:19:01.267946 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:02 crc kubenswrapper[4687]: I0314 10:19:02.128530 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:19:02 crc kubenswrapper[4687]: I0314 10:19:02.128901 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:19:02 crc kubenswrapper[4687]: I0314 10:19:02.128917 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:19:02 crc kubenswrapper[4687]: I0314 10:19:02.128931 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:19:02 crc kubenswrapper[4687]: I0314 10:19:02.275388 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:02 crc kubenswrapper[4687]: E0314 10:19:02.275610 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:06 crc kubenswrapper[4687]: I0314 10:19:06.737081 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:19:06 crc 
kubenswrapper[4687]: E0314 10:19:06.737827 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:12 crc kubenswrapper[4687]: I0314 10:19:12.738934 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:12 crc kubenswrapper[4687]: E0314 10:19:12.739618 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.213447 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-966dfd5fd-rdsjp_165379e3-6caa-4b42-b61a-ef153a72f7d2/barbican-api/0.log" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.402905 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-966dfd5fd-rdsjp_165379e3-6caa-4b42-b61a-ef153a72f7d2/barbican-api-log/0.log" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.446302 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d74b7768-djqrt_b0127f00-aece-46a2-86ea-42ce2ee74619/barbican-keystone-listener/0.log" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.573594 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d74b7768-djqrt_b0127f00-aece-46a2-86ea-42ce2ee74619/barbican-keystone-listener-log/0.log" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.825080 4687 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cb9cbbf5c-2kkj5_215e8fbe-617a-4af4-9ba2-497e59e04008/barbican-worker/0.log" Mar 14 10:19:15 crc kubenswrapper[4687]: I0314 10:19:15.898489 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cb9cbbf5c-2kkj5_215e8fbe-617a-4af4-9ba2-497e59e04008/barbican-worker-log/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.015899 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:16 crc kubenswrapper[4687]: E0314 10:19:16.016278 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" containerName="container-00" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.016295 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" containerName="container-00" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.016506 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecae2ad-f1cf-4d57-b6aa-17f9094c6b10" containerName="container-00" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.017873 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.029557 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.076607 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef/ceilometer-central-agent/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.082216 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.082287 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.082313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlww\" (UniqueName: \"kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.115942 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef/ceilometer-notification-agent/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 
10:19:16.168998 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef/proxy-httpd/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.180821 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1e84c843-c7ab-4f8a-82fc-ffe1464cc1ef/sg-core/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.183689 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.183744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.183768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlww\" (UniqueName: \"kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.184153 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 
10:19:16.184251 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.208554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlww\" (UniqueName: \"kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww\") pod \"community-operators-b5jtx\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.346308 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.579675 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f02a356-a61c-43f4-af36-acbb8e11e187/cinder-api-log/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.786912 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f02a356-a61c-43f4-af36-acbb8e11e187/cinder-api/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.829731 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4abc7dfc-bc68-41b0-ba51-175e0febcc3b/cinder-scheduler/0.log" Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.849591 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:16 crc kubenswrapper[4687]: I0314 10:19:16.997542 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4abc7dfc-bc68-41b0-ba51-175e0febcc3b/probe/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.064770 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85bc458d95-5dpkr_7f4b5e55-374f-464c-b584-89cd9ea5a2d6/init/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.215166 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85bc458d95-5dpkr_7f4b5e55-374f-464c-b584-89cd9ea5a2d6/dnsmasq-dns/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.248004 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85bc458d95-5dpkr_7f4b5e55-374f-464c-b584-89cd9ea5a2d6/init/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.251877 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_40393804-849f-448b-a65e-39e17e2f84cb/glance-httpd/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.416854 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_40393804-849f-448b-a65e-39e17e2f84cb/glance-log/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.429444 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerID="69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2" exitCode=0 Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.429489 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerDied","Data":"69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2"} Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.429515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerStarted","Data":"28f33785fb0d1c8191d1fa9356fa7bb745be7564b6048f1ef563fe812d445911"} Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.469452 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cf949dbe-ea29-420f-8e2c-ae02a1980bb9/glance-log/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.505625 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cf949dbe-ea29-420f-8e2c-ae02a1980bb9/glance-httpd/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.700762 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74f987fc4-zw2rw_a89460b9-5c8a-4000-ac6a-6202699a10d1/horizon/16.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.740874 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:19:17 crc kubenswrapper[4687]: E0314 10:19:17.741156 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.741401 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74f987fc4-zw2rw_a89460b9-5c8a-4000-ac6a-6202699a10d1/horizon-log/0.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.782799 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74f987fc4-zw2rw_a89460b9-5c8a-4000-ac6a-6202699a10d1/horizon/16.log" Mar 14 10:19:17 crc kubenswrapper[4687]: I0314 10:19:17.985733 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dcd9ff5b-bprxd_00a62493-95c1-4765-8b9e-4188b68c587c/horizon-log/0.log" Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.010654 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-7dcd9ff5b-bprxd_00a62493-95c1-4765-8b9e-4188b68c587c/horizon/16.log" Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.047830 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dcd9ff5b-bprxd_00a62493-95c1-4765-8b9e-4188b68c587c/horizon/16.log" Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.290690 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29558041-tgrn9_94953a0f-cefa-459d-9beb-b73414626765/keystone-cron/0.log" Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.300642 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7978d574c6-8llvn_5046ed05-72aa-4064-a9fd-940663b5844b/keystone-api/0.log" Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.439393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerStarted","Data":"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced"} Mar 14 10:19:18 crc kubenswrapper[4687]: I0314 10:19:18.492776 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4a2d5720-4d32-4320-abf4-18c7a4d70e33/kube-state-metrics/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.063971 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6ff6c58d89-bss4w_61b16c55-e6d0-4d0c-b2df-d6940fc67dd8/neutron-httpd/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.464516 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_83405408-9b65-42fd-955c-952cad220093/setup-container/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.470723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" 
event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerDied","Data":"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced"} Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.470692 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerID="87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced" exitCode=0 Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.484411 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_83405408-9b65-42fd-955c-952cad220093/setup-container/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.499781 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_83405408-9b65-42fd-955c-952cad220093/rabbitmq/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.574286 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6ff6c58d89-bss4w_61b16c55-e6d0-4d0c-b2df-d6940fc67dd8/neutron-api/0.log" Mar 14 10:19:19 crc kubenswrapper[4687]: I0314 10:19:19.982639 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_47f5a5c0-8928-4d3e-9098-a3338401c52e/nova-api-log/0.log" Mar 14 10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.089500 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_84918aa9-4677-4b9d-8cf6-e4fc0ace5144/nova-cell0-conductor-conductor/0.log" Mar 14 10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.293612 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f7269d07-db77-4395-92c2-642b7237ae80/nova-cell1-conductor-conductor/0.log" Mar 14 10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.403485 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3455a184-abb4-4642-888d-b5c9ba36b999/nova-cell1-novncproxy-novncproxy/0.log" Mar 14 
10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.454944 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_47f5a5c0-8928-4d3e-9098-a3338401c52e/nova-api-api/0.log" Mar 14 10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.480932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerStarted","Data":"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf"} Mar 14 10:19:20 crc kubenswrapper[4687]: I0314 10:19:20.504305 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5jtx" podStartSLOduration=3.094877249 podStartE2EDuration="5.504285106s" podCreationTimestamp="2026-03-14 10:19:15 +0000 UTC" firstStartedPulling="2026-03-14 10:19:17.43422802 +0000 UTC m=+4942.422468405" lastFinishedPulling="2026-03-14 10:19:19.843635887 +0000 UTC m=+4944.831876262" observedRunningTime="2026-03-14 10:19:20.495770636 +0000 UTC m=+4945.484011011" watchObservedRunningTime="2026-03-14 10:19:20.504285106 +0000 UTC m=+4945.492525471" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.228365 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_96f62246-5351-422e-8e28-fe9926c7dd39/nova-metadata-log/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.493696 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3a5150eb-8e4a-4162-b122-602614d01773/nova-scheduler-scheduler/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.497091 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce12a318-4d43-442e-9621-690da5f189eb/mysql-bootstrap/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.637981 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_96f62246-5351-422e-8e28-fe9926c7dd39/nova-metadata-metadata/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.684438 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce12a318-4d43-442e-9621-690da5f189eb/mysql-bootstrap/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.741189 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce12a318-4d43-442e-9621-690da5f189eb/galera/0.log" Mar 14 10:19:21 crc kubenswrapper[4687]: I0314 10:19:21.936119 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cd37e7dc-9797-42c5-865f-832412233c32/mysql-bootstrap/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.082116 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cd37e7dc-9797-42c5-865f-832412233c32/mysql-bootstrap/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.093792 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cd37e7dc-9797-42c5-865f-832412233c32/galera/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.140531 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a285a7f0-8991-4a29-a2b0-2c31bcba7433/openstackclient/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.293842 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nm5b2_3a8d425d-cf87-4065-aaf4-8633e9387048/openstack-network-exporter/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.446647 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sfclc_af0f78a2-052f-428c-8b71-425a477a00bd/ovsdb-server-init/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.595145 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-sfclc_af0f78a2-052f-428c-8b71-425a477a00bd/ovs-vswitchd/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.597036 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sfclc_af0f78a2-052f-428c-8b71-425a477a00bd/ovsdb-server-init/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.676721 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sfclc_af0f78a2-052f-428c-8b71-425a477a00bd/ovsdb-server/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.791824 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s5czb_5f410ca3-8151-42b5-9250-837b9444eb7e/ovn-controller/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.872219 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab26cdb3-69d2-4c25-8b3f-e46d5c866df9/openstack-network-exporter/0.log" Mar 14 10:19:22 crc kubenswrapper[4687]: I0314 10:19:22.980771 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab26cdb3-69d2-4c25-8b3f-e46d5c866df9/ovn-northd/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.055100 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79f508ad-6f43-451b-b4f2-2250754b5b1c/openstack-network-exporter/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.094586 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79f508ad-6f43-451b-b4f2-2250754b5b1c/ovsdbserver-nb/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.262014 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_95b6741a-6d2f-45c7-81a1-4e254e9e23f8/openstack-network-exporter/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.286899 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_95b6741a-6d2f-45c7-81a1-4e254e9e23f8/ovsdbserver-sb/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.495922 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c699fbccb-rdl22_e3088e0c-b79c-41f3-9fc6-ef0d797943e0/placement-api/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.620672 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c699fbccb-rdl22_e3088e0c-b79c-41f3-9fc6-ef0d797943e0/placement-log/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.622785 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_40c58540-7bfb-429a-bce0-2231dffb158e/init-config-reloader/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.736627 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:23 crc kubenswrapper[4687]: E0314 10:19:23.736938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.763535 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_40c58540-7bfb-429a-bce0-2231dffb158e/config-reloader/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.783682 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_40c58540-7bfb-429a-bce0-2231dffb158e/init-config-reloader/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.851890 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_40c58540-7bfb-429a-bce0-2231dffb158e/thanos-sidecar/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.875747 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_40c58540-7bfb-429a-bce0-2231dffb158e/prometheus/0.log" Mar 14 10:19:23 crc kubenswrapper[4687]: I0314 10:19:23.983553 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4bc54e55-6120-453d-8955-b7f478318618/setup-container/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.214496 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4bc54e55-6120-453d-8955-b7f478318618/rabbitmq/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.228360 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6785aec9-5237-4c55-9ec3-1d8783495b3a/setup-container/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.229122 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4bc54e55-6120-453d-8955-b7f478318618/setup-container/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.457742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6785aec9-5237-4c55-9ec3-1d8783495b3a/setup-container/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.515565 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6785aec9-5237-4c55-9ec3-1d8783495b3a/rabbitmq/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.627219 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d6889c85c-hl9hs_167eccc2-08cb-4683-a74b-360da7bfb902/proxy-httpd/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.696492 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-d6889c85c-hl9hs_167eccc2-08cb-4683-a74b-360da7bfb902/proxy-server/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.724015 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jnnjj_da2793f1-2651-4b4b-ad8c-d7f99e012e42/swift-ring-rebalance/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.944907 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/account-auditor/0.log" Mar 14 10:19:24 crc kubenswrapper[4687]: I0314 10:19:24.969258 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/account-reaper/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.037812 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/account-replicator/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.087027 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/account-server/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.161170 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/container-auditor/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.234950 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/container-server/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.251736 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/container-replicator/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.349741 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/container-updater/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.382619 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/object-auditor/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.407749 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/object-expirer/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.548610 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/object-replicator/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.589346 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/object-server/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.615282 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/object-updater/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.632000 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/rsync/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.822005 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fa2b161d-a32b-4bb8-b947-455a1f17aa59/swift-recon-cron/0.log" Mar 14 10:19:25 crc kubenswrapper[4687]: I0314 10:19:25.984004 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_61680d0d-2191-4534-bdaf-0032b9ebe805/watcher-api-log/0.log" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.188090 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-applier-0_41e97271-7c86-445b-9af3-4ff4d74a8c84/watcher-applier/0.log" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.346454 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.347161 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.390512 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.599752 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.606574 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_593ae32b-34ff-4ffa-a6b5-e636cf2afd0e/watcher-decision-engine/0.log" Mar 14 10:19:26 crc kubenswrapper[4687]: I0314 10:19:26.646990 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:28 crc kubenswrapper[4687]: I0314 10:19:28.363526 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_61680d0d-2191-4534-bdaf-0032b9ebe805/watcher-api/0.log" Mar 14 10:19:28 crc kubenswrapper[4687]: I0314 10:19:28.549961 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5jtx" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="registry-server" containerID="cri-o://86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf" gracePeriod=2 Mar 14 10:19:28 crc kubenswrapper[4687]: I0314 10:19:28.737152 4687 scope.go:117] "RemoveContainer" 
containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:19:28 crc kubenswrapper[4687]: E0314 10:19:28.737390 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.116195 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.216450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content\") pod \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.216501 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghlww\" (UniqueName: \"kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww\") pod \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.216572 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities\") pod \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\" (UID: \"7dd63dc6-cb9c-4e90-84ff-ae461695d76a\") " Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.218925 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities" (OuterVolumeSpecName: 
"utilities") pod "7dd63dc6-cb9c-4e90-84ff-ae461695d76a" (UID: "7dd63dc6-cb9c-4e90-84ff-ae461695d76a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.223954 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww" (OuterVolumeSpecName: "kube-api-access-ghlww") pod "7dd63dc6-cb9c-4e90-84ff-ae461695d76a" (UID: "7dd63dc6-cb9c-4e90-84ff-ae461695d76a"). InnerVolumeSpecName "kube-api-access-ghlww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.268301 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dd63dc6-cb9c-4e90-84ff-ae461695d76a" (UID: "7dd63dc6-cb9c-4e90-84ff-ae461695d76a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.320714 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.320746 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghlww\" (UniqueName: \"kubernetes.io/projected/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-kube-api-access-ghlww\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.320756 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd63dc6-cb9c-4e90-84ff-ae461695d76a-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.562370 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerID="86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf" exitCode=0 Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.562412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerDied","Data":"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf"} Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.562661 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5jtx" event={"ID":"7dd63dc6-cb9c-4e90-84ff-ae461695d76a","Type":"ContainerDied","Data":"28f33785fb0d1c8191d1fa9356fa7bb745be7564b6048f1ef563fe812d445911"} Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.562689 4687 scope.go:117] "RemoveContainer" containerID="86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 
10:19:29.562436 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5jtx" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.583514 4687 scope.go:117] "RemoveContainer" containerID="87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced" Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.605387 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.609950 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5jtx"] Mar 14 10:19:29 crc kubenswrapper[4687]: I0314 10:19:29.748317 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" path="/var/lib/kubelet/pods/7dd63dc6-cb9c-4e90-84ff-ae461695d76a/volumes" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.095054 4687 scope.go:117] "RemoveContainer" containerID="69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.143453 4687 scope.go:117] "RemoveContainer" containerID="86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf" Mar 14 10:19:30 crc kubenswrapper[4687]: E0314 10:19:30.147646 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf\": container with ID starting with 86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf not found: ID does not exist" containerID="86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.147698 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf"} err="failed to get 
container status \"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf\": rpc error: code = NotFound desc = could not find container \"86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf\": container with ID starting with 86e92641c2648acae465f1fd4160be1d291a62a1a9a5f085842373aa2c84b6cf not found: ID does not exist" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.147731 4687 scope.go:117] "RemoveContainer" containerID="87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced" Mar 14 10:19:30 crc kubenswrapper[4687]: E0314 10:19:30.149997 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced\": container with ID starting with 87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced not found: ID does not exist" containerID="87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.150041 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced"} err="failed to get container status \"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced\": rpc error: code = NotFound desc = could not find container \"87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced\": container with ID starting with 87b01d4a0c3ebe1325bde0f3301567c2ff02408947d0d66d480f04f159d90ced not found: ID does not exist" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.150068 4687 scope.go:117] "RemoveContainer" containerID="69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2" Mar 14 10:19:30 crc kubenswrapper[4687]: E0314 10:19:30.150459 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2\": container with ID starting with 69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2 not found: ID does not exist" containerID="69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2" Mar 14 10:19:30 crc kubenswrapper[4687]: I0314 10:19:30.150523 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2"} err="failed to get container status \"69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2\": rpc error: code = NotFound desc = could not find container \"69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2\": container with ID starting with 69c91e3ac9e11900a3de318427cbbb6d78b3961305b33be4a2adf24525d00ad2 not found: ID does not exist" Mar 14 10:19:34 crc kubenswrapper[4687]: I0314 10:19:34.636865 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_59923589-5501-43dc-af74-eb7006f6c427/memcached/0.log" Mar 14 10:19:35 crc kubenswrapper[4687]: I0314 10:19:35.746201 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:35 crc kubenswrapper[4687]: E0314 10:19:35.746811 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:39 crc kubenswrapper[4687]: I0314 10:19:39.736807 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:19:39 crc kubenswrapper[4687]: E0314 10:19:39.737621 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:47 crc kubenswrapper[4687]: I0314 10:19:47.737126 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:47 crc kubenswrapper[4687]: E0314 10:19:47.737809 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:51 crc kubenswrapper[4687]: I0314 10:19:51.737715 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:19:51 crc kubenswrapper[4687]: E0314 10:19:51.738932 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.840504 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:19:52 crc kubenswrapper[4687]: E0314 10:19:52.841603 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="extract-content" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.841635 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="extract-content" Mar 
14 10:19:52 crc kubenswrapper[4687]: E0314 10:19:52.841681 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="extract-utilities" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.841696 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="extract-utilities" Mar 14 10:19:52 crc kubenswrapper[4687]: E0314 10:19:52.841731 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="registry-server" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.841745 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="registry-server" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.842177 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd63dc6-cb9c-4e90-84ff-ae461695d76a" containerName="registry-server" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.845110 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.874590 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.944206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.944361 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:52 crc kubenswrapper[4687]: I0314 10:19:52.944570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrwg\" (UniqueName: \"kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.046581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csrwg\" (UniqueName: \"kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.046706 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.046748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.047278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.047287 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.088357 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csrwg\" (UniqueName: \"kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg\") pod \"redhat-marketplace-mhxnc\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.172968 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.690681 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:19:53 crc kubenswrapper[4687]: I0314 10:19:53.824077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerStarted","Data":"f029a360fd0e17cb244bcadaef6d92ffca9662d653003d07635b9a984e20a194"} Mar 14 10:19:54 crc kubenswrapper[4687]: I0314 10:19:54.835804 4687 generic.go:334] "Generic (PLEG): container finished" podID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerID="8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219" exitCode=0 Mar 14 10:19:54 crc kubenswrapper[4687]: I0314 10:19:54.836096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerDied","Data":"8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219"} Mar 14 10:19:55 crc kubenswrapper[4687]: I0314 10:19:55.813700 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/util/0.log" Mar 14 10:19:55 crc kubenswrapper[4687]: I0314 10:19:55.846412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerStarted","Data":"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4"} Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.016270 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/pull/0.log" Mar 14 
10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.024390 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/util/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.075515 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/pull/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.268391 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/util/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.343089 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/pull/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.413078 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b2fc087f8a76dbce8476595fc384d33288943d0097430396a1362c765ec6552_083ec994-abcc-49d0-a79d-5b2a54a1ab00/extract/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.534492 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-d47688694-v6wjr_52b416da-f3c4-43f1-a91a-14dac5c1cf25/manager/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.772100 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-bb2q4_56b6df44-ea65-46b4-93da-67d70a3769b1/manager/0.log" Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.857131 4687 generic.go:334] "Generic (PLEG): container finished" podID="c121e6be-10af-470d-972c-ea1eaa0ba126" 
containerID="6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4" exitCode=0 Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.857172 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerDied","Data":"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4"} Mar 14 10:19:56 crc kubenswrapper[4687]: I0314 10:19:56.980026 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-rblcv_6de94ed3-f1ea-4cb3-88a7-c78a0af38830/manager/0.log" Mar 14 10:19:57 crc kubenswrapper[4687]: I0314 10:19:57.084124 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-426nr_bc39ee13-24c1-4c4d-9aed-3ce11c3eceb9/manager/0.log" Mar 14 10:19:57 crc kubenswrapper[4687]: I0314 10:19:57.321879 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-pttvx_40a7f8a7-9a5c-4607-a685-747c0fc779b5/manager/0.log" Mar 14 10:19:57 crc kubenswrapper[4687]: I0314 10:19:57.571839 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bc894d9b-s6578_44cc6d95-7c97-4e12-a369-4595f9a540cd/manager/0.log" Mar 14 10:19:57 crc kubenswrapper[4687]: I0314 10:19:57.880126 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerStarted","Data":"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970"} Mar 14 10:19:57 crc kubenswrapper[4687]: I0314 10:19:57.912795 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhxnc" podStartSLOduration=3.4238508 podStartE2EDuration="5.912766313s" 
podCreationTimestamp="2026-03-14 10:19:52 +0000 UTC" firstStartedPulling="2026-03-14 10:19:54.838658479 +0000 UTC m=+4979.826898844" lastFinishedPulling="2026-03-14 10:19:57.327573982 +0000 UTC m=+4982.315814357" observedRunningTime="2026-03-14 10:19:57.906320755 +0000 UTC m=+4982.894561140" watchObservedRunningTime="2026-03-14 10:19:57.912766313 +0000 UTC m=+4982.901006688" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.009381 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-cqznw_59360bb4-cf7e-41a7-b78e-0615b4cd15e4/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.023038 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54dc5b8f8d-m8hhf_9a2eadbd-233b-49a0-b869-de204e01663c/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.348115 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-57b484b4df-nbctg_18179c6b-b84c-4bd5-b077-fc8c8689e12f/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.357872 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-wghgn_e76c5b65-d7d1-4986-90fe-dab7724bc142/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.428836 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5b6b6b4c9f-2q8x2_481cbeed-e6fd-4afe-a6af-043a6a06a521/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.652266 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-nsm5m_c898928b-40b3-46d7-87e7-dfd483949ed2/manager/0.log" Mar 14 10:19:58 crc kubenswrapper[4687]: I0314 10:19:58.710862 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f84474648-htnfl_b32556c3-5b91-4f2d-8f1f-6a1b2eae0367/manager/0.log" Mar 14 10:19:59 crc kubenswrapper[4687]: I0314 10:19:59.312591 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-9cbr4_138d02d6-70f7-4310-b250-2756a22333b5/manager/0.log" Mar 14 10:19:59 crc kubenswrapper[4687]: I0314 10:19:59.369099 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7mvbw4_faf0ec50-97de-40e8-9e7e-c407f08e2de6/manager/0.log" Mar 14 10:19:59 crc kubenswrapper[4687]: I0314 10:19:59.737726 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:19:59 crc kubenswrapper[4687]: E0314 10:19:59.737976 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:19:59 crc kubenswrapper[4687]: I0314 10:19:59.833958 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bdb46c895-chnbd_ac2cf39c-8fde-45de-858d-c8c9a8a572a1/operator/0.log" Mar 14 10:19:59 crc kubenswrapper[4687]: I0314 10:19:59.928386 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-762kl_11a5a905-c530-43e0-87db-4437b61ed3da/registry-server/0.log" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.151513 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558060-tt4nq"] Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.153482 4687 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.158547 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.158753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.158874 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.163887 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-tt4nq"] Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.254236 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-lf9b6_79c0c137-135c-49a1-bd73-20e6325ca1e6/manager/0.log" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.299847 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5lb\" (UniqueName: \"kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb\") pod \"auto-csr-approver-29558060-tt4nq\" (UID: \"de177a96-56d3-4450-9e34-ab28e17b5fd1\") " pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.363019 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-gb8wb_5a1032a1-2d67-404d-8461-c84ab72bd2a3/manager/0.log" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.401238 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w5lb\" (UniqueName: 
\"kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb\") pod \"auto-csr-approver-29558060-tt4nq\" (UID: \"de177a96-56d3-4450-9e34-ab28e17b5fd1\") " pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.434805 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-j7pgq_50752c5a-72da-4aa5-838e-1b8f7d3ccb04/operator/0.log" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.762150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w5lb\" (UniqueName: \"kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb\") pod \"auto-csr-approver-29558060-tt4nq\" (UID: \"de177a96-56d3-4450-9e34-ab28e17b5fd1\") " pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.798223 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:00 crc kubenswrapper[4687]: I0314 10:20:00.843277 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59cdb7596d-w9jwl_b608e71f-c0d4-463a-ba9f-a6becc4f54b6/manager/0.log" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.254407 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-tt4nq"] Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.401160 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-bwzgn_ace7279e-00c3-42fa-8dfd-ff8f3256c6f0/manager/0.log" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.433364 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7f9cc5dd44-t78v7_67b29df1-eced-4bfd-9c8c-24d56f0f880c/manager/0.log" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.724092 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6854b8b9d9-rmdj4_6dea1f44-0092-43b8-9576-b8b64b08d923/manager/0.log" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.735158 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.738128 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.771212 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.835403 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llr8p\" (UniqueName: \"kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.835810 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.835962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.839405 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7b8d757b5d-r8fgz_19eab663-f7c2-4a2a-923c-a5806353c911/manager/0.log" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.925245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" 
event={"ID":"de177a96-56d3-4450-9e34-ab28e17b5fd1","Type":"ContainerStarted","Data":"527e26468629b2311d419993918f22f6f3544f219266d7e8873cfc5a260509c2"} Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.938308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.938772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.938938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llr8p\" (UniqueName: \"kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.939174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.938829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content\") pod \"redhat-operators-255qg\" (UID: 
\"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:01 crc kubenswrapper[4687]: I0314 10:20:01.977860 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llr8p\" (UniqueName: \"kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p\") pod \"redhat-operators-255qg\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.065979 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.642228 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.736525 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:20:02 crc kubenswrapper[4687]: E0314 10:20:02.736900 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.934450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" event={"ID":"de177a96-56d3-4450-9e34-ab28e17b5fd1","Type":"ContainerStarted","Data":"2ab6e0f4c9fdab20b2e251117651cec7ddf2261a454f9a2cf15e1ede90d1fa9e"} Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.937035 4687 generic.go:334] "Generic (PLEG): container finished" podID="a826f790-0937-47c8-a095-4d0ed3baa371" 
containerID="27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91" exitCode=0 Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.937080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerDied","Data":"27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91"} Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.937109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerStarted","Data":"31ca459d9fa6f593b63a38c3c4e66ce9d26997601edf0f91bc52e918f53ca197"} Mar 14 10:20:02 crc kubenswrapper[4687]: I0314 10:20:02.953687 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" podStartSLOduration=1.9591732720000001 podStartE2EDuration="2.95366842s" podCreationTimestamp="2026-03-14 10:20:00 +0000 UTC" firstStartedPulling="2026-03-14 10:20:01.29471231 +0000 UTC m=+4986.282952685" lastFinishedPulling="2026-03-14 10:20:02.289207458 +0000 UTC m=+4987.277447833" observedRunningTime="2026-03-14 10:20:02.95121769 +0000 UTC m=+4987.939458065" watchObservedRunningTime="2026-03-14 10:20:02.95366842 +0000 UTC m=+4987.941908795" Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.173176 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.174477 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.229053 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.947491 
4687 generic.go:334] "Generic (PLEG): container finished" podID="de177a96-56d3-4450-9e34-ab28e17b5fd1" containerID="2ab6e0f4c9fdab20b2e251117651cec7ddf2261a454f9a2cf15e1ede90d1fa9e" exitCode=0 Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.947691 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" event={"ID":"de177a96-56d3-4450-9e34-ab28e17b5fd1","Type":"ContainerDied","Data":"2ab6e0f4c9fdab20b2e251117651cec7ddf2261a454f9a2cf15e1ede90d1fa9e"} Mar 14 10:20:03 crc kubenswrapper[4687]: I0314 10:20:03.951511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerStarted","Data":"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0"} Mar 14 10:20:04 crc kubenswrapper[4687]: I0314 10:20:04.003696 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.340717 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.403101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w5lb\" (UniqueName: \"kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb\") pod \"de177a96-56d3-4450-9e34-ab28e17b5fd1\" (UID: \"de177a96-56d3-4450-9e34-ab28e17b5fd1\") " Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.409480 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb" (OuterVolumeSpecName: "kube-api-access-2w5lb") pod "de177a96-56d3-4450-9e34-ab28e17b5fd1" (UID: "de177a96-56d3-4450-9e34-ab28e17b5fd1"). 
InnerVolumeSpecName "kube-api-access-2w5lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.505160 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.506075 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w5lb\" (UniqueName: \"kubernetes.io/projected/de177a96-56d3-4450-9e34-ab28e17b5fd1-kube-api-access-2w5lb\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.966810 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" event={"ID":"de177a96-56d3-4450-9e34-ab28e17b5fd1","Type":"ContainerDied","Data":"527e26468629b2311d419993918f22f6f3544f219266d7e8873cfc5a260509c2"} Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.966851 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-tt4nq" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.966866 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527e26468629b2311d419993918f22f6f3544f219266d7e8873cfc5a260509c2" Mar 14 10:20:05 crc kubenswrapper[4687]: I0314 10:20:05.966962 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhxnc" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="registry-server" containerID="cri-o://89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970" gracePeriod=2 Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.022034 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-xbn4t"] Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.030963 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-infra/auto-csr-approver-29558054-xbn4t"] Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.526490 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.625804 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities\") pod \"c121e6be-10af-470d-972c-ea1eaa0ba126\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.625988 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csrwg\" (UniqueName: \"kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg\") pod \"c121e6be-10af-470d-972c-ea1eaa0ba126\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.626220 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content\") pod \"c121e6be-10af-470d-972c-ea1eaa0ba126\" (UID: \"c121e6be-10af-470d-972c-ea1eaa0ba126\") " Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.626710 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities" (OuterVolumeSpecName: "utilities") pod "c121e6be-10af-470d-972c-ea1eaa0ba126" (UID: "c121e6be-10af-470d-972c-ea1eaa0ba126"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.644575 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg" (OuterVolumeSpecName: "kube-api-access-csrwg") pod "c121e6be-10af-470d-972c-ea1eaa0ba126" (UID: "c121e6be-10af-470d-972c-ea1eaa0ba126"). InnerVolumeSpecName "kube-api-access-csrwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.657424 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c121e6be-10af-470d-972c-ea1eaa0ba126" (UID: "c121e6be-10af-470d-972c-ea1eaa0ba126"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.729787 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.729848 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c121e6be-10af-470d-972c-ea1eaa0ba126-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.729868 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csrwg\" (UniqueName: \"kubernetes.io/projected/c121e6be-10af-470d-972c-ea1eaa0ba126-kube-api-access-csrwg\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.983791 4687 generic.go:334] "Generic (PLEG): container finished" podID="c121e6be-10af-470d-972c-ea1eaa0ba126" 
containerID="89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970" exitCode=0 Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.984100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerDied","Data":"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970"} Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.984134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhxnc" event={"ID":"c121e6be-10af-470d-972c-ea1eaa0ba126","Type":"ContainerDied","Data":"f029a360fd0e17cb244bcadaef6d92ffca9662d653003d07635b9a984e20a194"} Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.984158 4687 scope.go:117] "RemoveContainer" containerID="89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970" Mar 14 10:20:06 crc kubenswrapper[4687]: I0314 10:20:06.984511 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhxnc" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.030357 4687 scope.go:117] "RemoveContainer" containerID="6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.032031 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.040491 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhxnc"] Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.064628 4687 scope.go:117] "RemoveContainer" containerID="8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.099416 4687 scope.go:117] "RemoveContainer" containerID="89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970" Mar 14 10:20:07 crc kubenswrapper[4687]: E0314 10:20:07.100006 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970\": container with ID starting with 89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970 not found: ID does not exist" containerID="89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.100049 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970"} err="failed to get container status \"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970\": rpc error: code = NotFound desc = could not find container \"89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970\": container with ID starting with 89254b6c0431412ed5be42108d8c4b68ebd474fba9b0e811f263ebd88499e970 not found: 
ID does not exist" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.100101 4687 scope.go:117] "RemoveContainer" containerID="6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4" Mar 14 10:20:07 crc kubenswrapper[4687]: E0314 10:20:07.100511 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4\": container with ID starting with 6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4 not found: ID does not exist" containerID="6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.100541 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4"} err="failed to get container status \"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4\": rpc error: code = NotFound desc = could not find container \"6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4\": container with ID starting with 6c14dc2b7cc2fa1c3b69d61b728a3f4451908f6ab8be6ee5460237976bfc9ad4 not found: ID does not exist" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.100560 4687 scope.go:117] "RemoveContainer" containerID="8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219" Mar 14 10:20:07 crc kubenswrapper[4687]: E0314 10:20:07.100989 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219\": container with ID starting with 8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219 not found: ID does not exist" containerID="8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.101113 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219"} err="failed to get container status \"8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219\": rpc error: code = NotFound desc = could not find container \"8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219\": container with ID starting with 8b4f9d5aa3c7ed097af725cbd5bd0b50acc2802e37832ad0e7029f1903b49219 not found: ID does not exist" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.751130 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18407777-1d9d-47fd-b34e-567a0066c2c3" path="/var/lib/kubelet/pods/18407777-1d9d-47fd-b34e-567a0066c2c3/volumes" Mar 14 10:20:07 crc kubenswrapper[4687]: I0314 10:20:07.751914 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" path="/var/lib/kubelet/pods/c121e6be-10af-470d-972c-ea1eaa0ba126/volumes" Mar 14 10:20:10 crc kubenswrapper[4687]: I0314 10:20:10.020804 4687 generic.go:334] "Generic (PLEG): container finished" podID="a826f790-0937-47c8-a095-4d0ed3baa371" containerID="913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0" exitCode=0 Mar 14 10:20:10 crc kubenswrapper[4687]: I0314 10:20:10.020867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerDied","Data":"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0"} Mar 14 10:20:11 crc kubenswrapper[4687]: I0314 10:20:11.033471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerStarted","Data":"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f"} Mar 14 10:20:11 crc kubenswrapper[4687]: I0314 10:20:11.052582 4687 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-255qg" podStartSLOduration=2.545079765 podStartE2EDuration="10.052565784s" podCreationTimestamp="2026-03-14 10:20:01 +0000 UTC" firstStartedPulling="2026-03-14 10:20:02.940954138 +0000 UTC m=+4987.929194513" lastFinishedPulling="2026-03-14 10:20:10.448440157 +0000 UTC m=+4995.436680532" observedRunningTime="2026-03-14 10:20:11.048156376 +0000 UTC m=+4996.036396741" watchObservedRunningTime="2026-03-14 10:20:11.052565784 +0000 UTC m=+4996.040806159" Mar 14 10:20:11 crc kubenswrapper[4687]: I0314 10:20:11.737833 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:20:11 crc kubenswrapper[4687]: E0314 10:20:11.739516 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:20:12 crc kubenswrapper[4687]: I0314 10:20:12.067127 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:12 crc kubenswrapper[4687]: I0314 10:20:12.067206 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:13 crc kubenswrapper[4687]: I0314 10:20:13.117604 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-255qg" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="registry-server" probeResult="failure" output=< Mar 14 10:20:13 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 14 10:20:13 crc kubenswrapper[4687]: > Mar 14 10:20:13 crc kubenswrapper[4687]: I0314 10:20:13.737671 4687 scope.go:117] 
"RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:20:13 crc kubenswrapper[4687]: E0314 10:20:13.737943 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.132451 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.188244 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.380285 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.811684 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n67l8_13956215-c64d-402b-9fac-e8deb16c0ea5/control-plane-machine-set-operator/0.log" Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.954350 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hnql_d5694044-0b34-45e7-ab8d-a140eaf37b70/machine-api-operator/0.log" Mar 14 10:20:22 crc kubenswrapper[4687]: I0314 10:20:22.956434 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hnql_d5694044-0b34-45e7-ab8d-a140eaf37b70/kube-rbac-proxy/0.log" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.111308 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.111671 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.139686 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-255qg" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="registry-server" containerID="cri-o://ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f" gracePeriod=2 Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.622088 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.737465 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:20:24 crc kubenswrapper[4687]: E0314 10:20:24.737758 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.781323 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities\") pod \"a826f790-0937-47c8-a095-4d0ed3baa371\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.782181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content\") pod \"a826f790-0937-47c8-a095-4d0ed3baa371\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.782245 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llr8p\" (UniqueName: \"kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p\") pod \"a826f790-0937-47c8-a095-4d0ed3baa371\" (UID: \"a826f790-0937-47c8-a095-4d0ed3baa371\") " Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.782255 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities" (OuterVolumeSpecName: "utilities") pod 
"a826f790-0937-47c8-a095-4d0ed3baa371" (UID: "a826f790-0937-47c8-a095-4d0ed3baa371"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.797731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p" (OuterVolumeSpecName: "kube-api-access-llr8p") pod "a826f790-0937-47c8-a095-4d0ed3baa371" (UID: "a826f790-0937-47c8-a095-4d0ed3baa371"). InnerVolumeSpecName "kube-api-access-llr8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.798290 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.900066 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llr8p\" (UniqueName: \"kubernetes.io/projected/a826f790-0937-47c8-a095-4d0ed3baa371-kube-api-access-llr8p\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:24 crc kubenswrapper[4687]: I0314 10:20:24.925491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a826f790-0937-47c8-a095-4d0ed3baa371" (UID: "a826f790-0937-47c8-a095-4d0ed3baa371"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.003025 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826f790-0937-47c8-a095-4d0ed3baa371-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.154445 4687 generic.go:334] "Generic (PLEG): container finished" podID="a826f790-0937-47c8-a095-4d0ed3baa371" containerID="ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f" exitCode=0 Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.154494 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerDied","Data":"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f"} Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.154544 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-255qg" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.154568 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-255qg" event={"ID":"a826f790-0937-47c8-a095-4d0ed3baa371","Type":"ContainerDied","Data":"31ca459d9fa6f593b63a38c3c4e66ce9d26997601edf0f91bc52e918f53ca197"} Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.154611 4687 scope.go:117] "RemoveContainer" containerID="ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.175547 4687 scope.go:117] "RemoveContainer" containerID="913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.186530 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.208052 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-255qg"] Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.222309 4687 scope.go:117] "RemoveContainer" containerID="27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.240619 4687 scope.go:117] "RemoveContainer" containerID="ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f" Mar 14 10:20:25 crc kubenswrapper[4687]: E0314 10:20:25.241128 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f\": container with ID starting with ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f not found: ID does not exist" containerID="ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.241157 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f"} err="failed to get container status \"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f\": rpc error: code = NotFound desc = could not find container \"ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f\": container with ID starting with ea260ac75319ad2f271c330ed1ace257dfd483234545443cc73183635773399f not found: ID does not exist" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.241178 4687 scope.go:117] "RemoveContainer" containerID="913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0" Mar 14 10:20:25 crc kubenswrapper[4687]: E0314 10:20:25.241512 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0\": container with ID starting with 913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0 not found: ID does not exist" containerID="913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.241535 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0"} err="failed to get container status \"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0\": rpc error: code = NotFound desc = could not find container \"913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0\": container with ID starting with 913253ff291a5e3f1e90396f0c9fe85fd79bc4625c44757614cfede570b22cb0 not found: ID does not exist" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.241550 4687 scope.go:117] "RemoveContainer" containerID="27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91" Mar 14 10:20:25 crc kubenswrapper[4687]: E0314 
10:20:25.241850 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91\": container with ID starting with 27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91 not found: ID does not exist" containerID="27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.241889 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91"} err="failed to get container status \"27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91\": rpc error: code = NotFound desc = could not find container \"27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91\": container with ID starting with 27015e92a78d7c3c2ddbe3650df92338a4af7c6a796a8dd5fd0ef74a15cc3e91 not found: ID does not exist" Mar 14 10:20:25 crc kubenswrapper[4687]: I0314 10:20:25.746949 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" path="/var/lib/kubelet/pods/a826f790-0937-47c8-a095-4d0ed3baa371/volumes" Mar 14 10:20:28 crc kubenswrapper[4687]: I0314 10:20:28.737752 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:20:28 crc kubenswrapper[4687]: E0314 10:20:28.738438 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:20:31 crc kubenswrapper[4687]: I0314 10:20:31.954349 4687 scope.go:117] "RemoveContainer" 
containerID="11933ae8189e74fae4f70c6f71e19b6a5d0b6d05700cd6b2110b285ad8070bdd" Mar 14 10:20:35 crc kubenswrapper[4687]: I0314 10:20:35.824197 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-nmcjc_9e984e4c-322e-4396-ab02-532fed35dcb4/cert-manager-controller/0.log" Mar 14 10:20:36 crc kubenswrapper[4687]: I0314 10:20:36.022269 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-zxfr6_1b523b43-4cd6-4b84-8e19-6f1f9b5b313c/cert-manager-cainjector/0.log" Mar 14 10:20:36 crc kubenswrapper[4687]: I0314 10:20:36.075458 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sb5j5_30bb39e4-9d81-40e6-bac2-7ab9126ed815/cert-manager-webhook/0.log" Mar 14 10:20:38 crc kubenswrapper[4687]: I0314 10:20:38.737633 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:20:38 crc kubenswrapper[4687]: E0314 10:20:38.738169 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:20:41 crc kubenswrapper[4687]: I0314 10:20:41.737536 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:20:41 crc kubenswrapper[4687]: E0314 10:20:41.738613 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:20:49 
crc kubenswrapper[4687]: I0314 10:20:49.736829 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:20:49 crc kubenswrapper[4687]: E0314 10:20:49.737428 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:20:50 crc kubenswrapper[4687]: I0314 10:20:50.105702 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-nd982_ecfaf9cc-3278-4efe-8dcc-050341daded0/nmstate-console-plugin/0.log" Mar 14 10:20:50 crc kubenswrapper[4687]: I0314 10:20:50.998296 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8q525_e25c82b0-63e6-47a6-9111-d35760fac0cf/nmstate-handler/0.log" Mar 14 10:20:51 crc kubenswrapper[4687]: I0314 10:20:51.006723 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-klcqb_cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c/kube-rbac-proxy/0.log" Mar 14 10:20:51 crc kubenswrapper[4687]: I0314 10:20:51.011616 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-klcqb_cd8d6f96-6a1c-40f1-9f6c-debd86d58e1c/nmstate-metrics/0.log" Mar 14 10:20:51 crc kubenswrapper[4687]: I0314 10:20:51.206103 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-zxl4d_2674f9c4-63f3-446d-9dfd-7df8abe18d59/nmstate-operator/0.log" Mar 14 10:20:51 crc kubenswrapper[4687]: I0314 10:20:51.263981 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-h5hgq_76fc1cd0-a564-4de3-9a3b-d420615cd640/nmstate-webhook/0.log" Mar 14 10:20:53 crc kubenswrapper[4687]: I0314 10:20:53.737703 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:20:53 crc kubenswrapper[4687]: E0314 10:20:53.738312 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:20:54 crc kubenswrapper[4687]: I0314 10:20:54.111994 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:20:54 crc kubenswrapper[4687]: I0314 10:20:54.112510 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:21:01 crc kubenswrapper[4687]: I0314 10:21:01.737416 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:21:01 crc kubenswrapper[4687]: E0314 10:21:01.738306 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" 
pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:21:04 crc kubenswrapper[4687]: I0314 10:21:04.738046 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:21:04 crc kubenswrapper[4687]: E0314 10:21:04.738744 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:21:07 crc kubenswrapper[4687]: I0314 10:21:07.532072 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zvl6t_205a8fe2-db14-415e-9c48-f83103f799a6/prometheus-operator/0.log" Mar 14 10:21:07 crc kubenswrapper[4687]: I0314 10:21:07.634796 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8_7bceb653-5d73-4f96-a7aa-fdd6aaa604f9/prometheus-operator-admission-webhook/0.log" Mar 14 10:21:07 crc kubenswrapper[4687]: I0314 10:21:07.714251 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz_c0e89855-d75c-4904-9632-30763bfbe2d7/prometheus-operator-admission-webhook/0.log" Mar 14 10:21:07 crc kubenswrapper[4687]: I0314 10:21:07.900984 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9mngh_4359a7ad-0b1c-42da-bba2-abfbf773cdfc/operator/0.log" Mar 14 10:21:07 crc kubenswrapper[4687]: I0314 10:21:07.955425 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p2jnw_7d73d1b3-dbad-4f27-8562-e534f69c896c/perses-operator/0.log" Mar 14 10:21:15 
crc kubenswrapper[4687]: I0314 10:21:15.743286 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:21:15 crc kubenswrapper[4687]: E0314 10:21:15.743992 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:21:16 crc kubenswrapper[4687]: I0314 10:21:16.736740 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:21:16 crc kubenswrapper[4687]: E0314 10:21:16.737204 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.110890 4687 patch_prober.go:28] interesting pod/machine-config-daemon-s5gw5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.111203 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.111242 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.111961 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec"} pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.112012 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerName="machine-config-daemon" containerID="cri-o://302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" gracePeriod=600 Mar 14 10:21:24 crc kubenswrapper[4687]: E0314 10:21:24.620099 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.701196 4687 generic.go:334] "Generic (PLEG): container finished" podID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" exitCode=0 Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.701239 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" 
event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerDied","Data":"302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec"} Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.701308 4687 scope.go:117] "RemoveContainer" containerID="1ba9b3ba9b3a86d0bd9986a2fbe6e4c5a51c19113cf6ca6958d54fc8873db8d3" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.702303 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:21:24 crc kubenswrapper[4687]: E0314 10:21:24.704265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:21:24 crc kubenswrapper[4687]: I0314 10:21:24.968559 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-hdk5z_b20ebc9c-3733-4824-bd5a-6b1e6dc1265a/kube-rbac-proxy/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.049439 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-hdk5z_b20ebc9c-3733-4824-bd5a-6b1e6dc1265a/controller/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.103290 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-frr-files/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.244403 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-frr-files/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.256406 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-reloader/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.281957 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-metrics/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.324976 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-reloader/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.477431 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-frr-files/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.490906 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-reloader/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.547616 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-metrics/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.548571 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-metrics/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.734560 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-reloader/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.750544 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-frr-files/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.755040 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/cp-metrics/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.809371 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/controller/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.923061 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/frr-metrics/0.log" Mar 14 10:21:25 crc kubenswrapper[4687]: I0314 10:21:25.955735 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/kube-rbac-proxy/0.log" Mar 14 10:21:26 crc kubenswrapper[4687]: I0314 10:21:26.067070 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/kube-rbac-proxy-frr/0.log" Mar 14 10:21:26 crc kubenswrapper[4687]: I0314 10:21:26.126229 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/reloader/0.log" Mar 14 10:21:26 crc kubenswrapper[4687]: I0314 10:21:26.904858 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74fd5dfb9c-pvx5v_cacb7e09-cbb9-4e89-a898-e6da0f4498b5/manager/0.log" Mar 14 10:21:26 crc kubenswrapper[4687]: I0314 10:21:26.918106 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-dw5bx_30d31ea4-3f62-4a7d-9c0c-162d87bab38a/frr-k8s-webhook-server/0.log" Mar 14 10:21:27 crc kubenswrapper[4687]: I0314 10:21:27.160660 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6464f7b86-mdwhg_77588bd6-38b1-4f43-a701-437cf2c3df99/webhook-server/0.log" Mar 14 10:21:27 crc kubenswrapper[4687]: I0314 10:21:27.324695 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pg6ns_90888982-f2a2-46f1-a099-05070e93b427/frr/0.log" Mar 14 10:21:27 crc kubenswrapper[4687]: I0314 10:21:27.365571 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6qhvs_b69e2289-9be3-45bd-bcea-89dddbc5e1c2/kube-rbac-proxy/0.log" Mar 14 10:21:27 crc kubenswrapper[4687]: I0314 10:21:27.695171 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6qhvs_b69e2289-9be3-45bd-bcea-89dddbc5e1c2/speaker/0.log" Mar 14 10:21:27 crc kubenswrapper[4687]: I0314 10:21:27.740080 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:21:27 crc kubenswrapper[4687]: E0314 10:21:27.740531 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:21:30 crc kubenswrapper[4687]: I0314 10:21:30.737455 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:21:30 crc kubenswrapper[4687]: E0314 10:21:30.738073 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:21:35 crc kubenswrapper[4687]: I0314 10:21:35.750898 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:21:35 crc kubenswrapper[4687]: E0314 10:21:35.751829 4687 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:21:38 crc kubenswrapper[4687]: I0314 10:21:38.738448 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:21:38 crc kubenswrapper[4687]: E0314 10:21:38.739272 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:21:41 crc kubenswrapper[4687]: I0314 10:21:41.639933 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/util/0.log" Mar 14 10:21:41 crc kubenswrapper[4687]: I0314 10:21:41.824025 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/util/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.456033 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/pull/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.456783 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/pull/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.618753 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/util/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.628835 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/pull/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.662542 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xlbq9_26714b78-95c2-42a9-bb50-a728f54b5c8a/extract/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.800080 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/util/0.log" Mar 14 10:21:42 crc kubenswrapper[4687]: I0314 10:21:42.955094 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/util/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.003059 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/pull/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.036860 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/pull/0.log" Mar 14 
10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.207177 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/extract/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.213978 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/pull/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.263326 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k4wnx_4f31a3c2-0deb-4826-ab7e-0da7a8091f19/util/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.405805 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/util/0.log" Mar 14 10:21:43 crc kubenswrapper[4687]: I0314 10:21:43.737524 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:21:43 crc kubenswrapper[4687]: E0314 10:21:43.737754 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.039732 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/pull/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.059653 4687 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/util/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.061137 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/pull/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.188652 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/pull/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.207171 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/util/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.261844 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nmdcz_575133e9-f490-4a9e-b062-f8b33b86ef27/extract/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.389654 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-utilities/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.576823 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-content/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.581130 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-utilities/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.605471 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-content/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.762563 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-utilities/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.789695 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/extract-content/0.log" Mar 14 10:21:44 crc kubenswrapper[4687]: I0314 10:21:44.997673 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-utilities/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.220449 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-utilities/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.224870 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-content/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.368629 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-content/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.410945 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xr8fx_cd8a2a9c-9fbb-417e-9428-503e7899305c/registry-server/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.445180 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-utilities/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.450625 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/extract-content/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.661083 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lpppm_70b65e8d-b8f9-44a2-a358-ca1f78d3ed7f/marketplace-operator/0.log" Mar 14 10:21:45 crc kubenswrapper[4687]: I0314 10:21:45.830960 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.033157 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.068535 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-content/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.153420 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-content/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.343927 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.386768 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/extract-content/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.523962 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.549877 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4cn6_3bbf9d41-c0f1-426b-bf77-578011dacfd5/registry-server/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.605927 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hbcd7_fff21ca7-1a0b-4a6d-84c2-2605625b4e62/registry-server/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.726519 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.731691 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-content/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.751435 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-content/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.891356 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-utilities/0.log" Mar 14 10:21:46 crc kubenswrapper[4687]: I0314 10:21:46.930885 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/extract-content/0.log" Mar 
14 10:21:47 crc kubenswrapper[4687]: I0314 10:21:47.464378 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fdzgw_c235d724-9cf5-4fb3-92fe-2da6bb33abed/registry-server/0.log" Mar 14 10:21:49 crc kubenswrapper[4687]: I0314 10:21:49.737853 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:21:49 crc kubenswrapper[4687]: E0314 10:21:49.738684 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:21:49 crc kubenswrapper[4687]: I0314 10:21:49.739264 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:21:49 crc kubenswrapper[4687]: E0314 10:21:49.739516 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:21:57 crc kubenswrapper[4687]: I0314 10:21:57.737823 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:21:57 crc kubenswrapper[4687]: E0314 10:21:57.738606 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" 
pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.150660 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558062-dcdlp"] Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151364 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151377 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151410 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151416 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151426 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151442 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151455 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151460 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151473 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151478 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151491 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151496 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4687]: E0314 10:22:00.151511 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de177a96-56d3-4450-9e34-ab28e17b5fd1" containerName="oc" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151517 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="de177a96-56d3-4450-9e34-ab28e17b5fd1" containerName="oc" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151715 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="de177a96-56d3-4450-9e34-ab28e17b5fd1" containerName="oc" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151745 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c121e6be-10af-470d-972c-ea1eaa0ba126" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.151768 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a826f790-0937-47c8-a095-4d0ed3baa371" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.152592 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.154926 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.155157 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.155842 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.160952 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558062-dcdlp"] Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.237898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpwj\" (UniqueName: \"kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj\") pod \"auto-csr-approver-29558062-dcdlp\" (UID: \"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec\") " pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.340129 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpwj\" (UniqueName: \"kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj\") pod \"auto-csr-approver-29558062-dcdlp\" (UID: \"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec\") " pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.360268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpwj\" (UniqueName: \"kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj\") pod \"auto-csr-approver-29558062-dcdlp\" (UID: \"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec\") " 
pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.483566 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:00 crc kubenswrapper[4687]: I0314 10:22:00.994898 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558062-dcdlp"] Mar 14 10:22:01 crc kubenswrapper[4687]: I0314 10:22:01.010382 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:22:01 crc kubenswrapper[4687]: I0314 10:22:01.302581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" event={"ID":"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec","Type":"ContainerStarted","Data":"fadd3dc2c57ac00cce5e15c6555290341a615075b6faaa0aed2ebf81718e592e"} Mar 14 10:22:02 crc kubenswrapper[4687]: I0314 10:22:02.312232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" event={"ID":"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec","Type":"ContainerStarted","Data":"f4604c59335513276180f48140a21a185dc95057dc1a940f87db79f94658c922"} Mar 14 10:22:02 crc kubenswrapper[4687]: I0314 10:22:02.328859 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" podStartSLOduration=1.584092206 podStartE2EDuration="2.328841972s" podCreationTimestamp="2026-03-14 10:22:00 +0000 UTC" firstStartedPulling="2026-03-14 10:22:01.01012656 +0000 UTC m=+5105.998366935" lastFinishedPulling="2026-03-14 10:22:01.754876336 +0000 UTC m=+5106.743116701" observedRunningTime="2026-03-14 10:22:02.323114662 +0000 UTC m=+5107.311355037" watchObservedRunningTime="2026-03-14 10:22:02.328841972 +0000 UTC m=+5107.317082347" Mar 14 10:22:02 crc kubenswrapper[4687]: I0314 10:22:02.389652 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77d9df5d76-mhprz_c0e89855-d75c-4904-9632-30763bfbe2d7/prometheus-operator-admission-webhook/0.log" Mar 14 10:22:02 crc kubenswrapper[4687]: I0314 10:22:02.404487 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zvl6t_205a8fe2-db14-415e-9c48-f83103f799a6/prometheus-operator/0.log" Mar 14 10:22:02 crc kubenswrapper[4687]: I0314 10:22:02.415475 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77d9df5d76-fjtr8_7bceb653-5d73-4f96-a7aa-fdd6aaa604f9/prometheus-operator-admission-webhook/0.log" Mar 14 10:22:03 crc kubenswrapper[4687]: I0314 10:22:03.177188 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9mngh_4359a7ad-0b1c-42da-bba2-abfbf773cdfc/operator/0.log" Mar 14 10:22:03 crc kubenswrapper[4687]: I0314 10:22:03.270118 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p2jnw_7d73d1b3-dbad-4f27-8562-e534f69c896c/perses-operator/0.log" Mar 14 10:22:03 crc kubenswrapper[4687]: I0314 10:22:03.346792 4687 generic.go:334] "Generic (PLEG): container finished" podID="aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec" containerID="f4604c59335513276180f48140a21a185dc95057dc1a940f87db79f94658c922" exitCode=0 Mar 14 10:22:03 crc kubenswrapper[4687]: I0314 10:22:03.346840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" event={"ID":"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec","Type":"ContainerDied","Data":"f4604c59335513276180f48140a21a185dc95057dc1a940f87db79f94658c922"} Mar 14 10:22:04 crc kubenswrapper[4687]: I0314 10:22:04.737076 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:22:04 crc kubenswrapper[4687]: 
E0314 10:22:04.737626 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:22:04 crc kubenswrapper[4687]: I0314 10:22:04.737980 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:22:04 crc kubenswrapper[4687]: E0314 10:22:04.738220 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.054594 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.136093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpwj\" (UniqueName: \"kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj\") pod \"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec\" (UID: \"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec\") " Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.144737 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj" (OuterVolumeSpecName: "kube-api-access-7kpwj") pod "aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec" (UID: "aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec"). 
InnerVolumeSpecName "kube-api-access-7kpwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.238524 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpwj\" (UniqueName: \"kubernetes.io/projected/aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec-kube-api-access-7kpwj\") on node \"crc\" DevicePath \"\"" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.363625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" event={"ID":"aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec","Type":"ContainerDied","Data":"fadd3dc2c57ac00cce5e15c6555290341a615075b6faaa0aed2ebf81718e592e"} Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.363662 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fadd3dc2c57ac00cce5e15c6555290341a615075b6faaa0aed2ebf81718e592e" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.363675 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-dcdlp" Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.423067 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-bmzfv"] Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.431935 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-bmzfv"] Mar 14 10:22:05 crc kubenswrapper[4687]: I0314 10:22:05.749047 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90232b1d-0f84-4fe9-882f-382f3ceb76a8" path="/var/lib/kubelet/pods/90232b1d-0f84-4fe9-882f-382f3ceb76a8/volumes" Mar 14 10:22:08 crc kubenswrapper[4687]: I0314 10:22:08.737575 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:22:08 crc kubenswrapper[4687]: E0314 10:22:08.738530 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:22:15 crc kubenswrapper[4687]: I0314 10:22:15.737418 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:22:15 crc kubenswrapper[4687]: E0314 10:22:15.738106 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:22:19 crc kubenswrapper[4687]: I0314 10:22:19.743002 4687 scope.go:117] "RemoveContainer" 
containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:22:19 crc kubenswrapper[4687]: E0314 10:22:19.744026 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:22:21 crc kubenswrapper[4687]: I0314 10:22:21.736788 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:22:21 crc kubenswrapper[4687]: E0314 10:22:21.737287 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:22:30 crc kubenswrapper[4687]: I0314 10:22:30.736982 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:22:30 crc kubenswrapper[4687]: E0314 10:22:30.738487 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:22:32 crc kubenswrapper[4687]: I0314 10:22:32.109830 4687 scope.go:117] "RemoveContainer" containerID="c5ce01a8e2ab3ad5764f935d360abac31beda6498db5c3553a618084f9b40488" Mar 14 10:22:34 crc kubenswrapper[4687]: I0314 10:22:34.737278 4687 
scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:22:34 crc kubenswrapper[4687]: E0314 10:22:34.738158 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:22:35 crc kubenswrapper[4687]: I0314 10:22:35.750691 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:22:35 crc kubenswrapper[4687]: E0314 10:22:35.751282 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:22:45 crc kubenswrapper[4687]: I0314 10:22:45.744838 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:22:45 crc kubenswrapper[4687]: E0314 10:22:45.747476 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:22:45 crc kubenswrapper[4687]: I0314 10:22:45.748488 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:22:45 crc kubenswrapper[4687]: 
E0314 10:22:45.749178 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:22:50 crc kubenswrapper[4687]: I0314 10:22:50.737013 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:22:50 crc kubenswrapper[4687]: E0314 10:22:50.737933 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:22:56 crc kubenswrapper[4687]: I0314 10:22:56.736705 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:22:56 crc kubenswrapper[4687]: E0314 10:22:56.737588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:22:58 crc kubenswrapper[4687]: I0314 10:22:58.736933 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:22:58 crc kubenswrapper[4687]: E0314 10:22:58.737714 4687 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:23:03 crc kubenswrapper[4687]: I0314 10:23:03.740931 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:23:03 crc kubenswrapper[4687]: E0314 10:23:03.742084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:23:09 crc kubenswrapper[4687]: I0314 10:23:09.738576 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:23:09 crc kubenswrapper[4687]: E0314 10:23:09.739484 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:23:11 crc kubenswrapper[4687]: I0314 10:23:11.736909 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:23:11 crc kubenswrapper[4687]: E0314 10:23:11.737477 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:23:16 crc kubenswrapper[4687]: I0314 10:23:16.736773 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:23:16 crc kubenswrapper[4687]: E0314 10:23:16.737456 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:23:22 crc kubenswrapper[4687]: I0314 10:23:22.737740 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:23:22 crc kubenswrapper[4687]: E0314 10:23:22.738800 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:23:26 crc kubenswrapper[4687]: I0314 10:23:26.737711 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:23:26 crc kubenswrapper[4687]: E0314 10:23:26.738939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:23:29 crc kubenswrapper[4687]: I0314 10:23:29.737499 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:23:29 crc kubenswrapper[4687]: E0314 10:23:29.738472 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:23:32 crc kubenswrapper[4687]: I0314 10:23:32.343781 4687 generic.go:334] "Generic (PLEG): container finished" podID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerID="8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc" exitCode=0 Mar 14 10:23:32 crc kubenswrapper[4687]: I0314 10:23:32.343857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" event={"ID":"44d0bd02-b076-408e-987c-1394b6fb6f0d","Type":"ContainerDied","Data":"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc"} Mar 14 10:23:32 crc kubenswrapper[4687]: I0314 10:23:32.345029 4687 scope.go:117] "RemoveContainer" containerID="8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc" Mar 14 10:23:32 crc kubenswrapper[4687]: I0314 10:23:32.455852 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sfb6d_must-gather-lkqfv_44d0bd02-b076-408e-987c-1394b6fb6f0d/gather/0.log" Mar 14 10:23:33 crc kubenswrapper[4687]: I0314 10:23:33.756448 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:23:33 crc kubenswrapper[4687]: E0314 10:23:33.759322 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:23:40 crc kubenswrapper[4687]: I0314 10:23:40.609711 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sfb6d/must-gather-lkqfv"] Mar 14 10:23:40 crc kubenswrapper[4687]: I0314 10:23:40.610869 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="copy" containerID="cri-o://5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24" gracePeriod=2 Mar 14 10:23:40 crc kubenswrapper[4687]: I0314 10:23:40.626380 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sfb6d/must-gather-lkqfv"] Mar 14 10:23:40 crc kubenswrapper[4687]: I0314 10:23:40.737147 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:23:40 crc kubenswrapper[4687]: E0314 10:23:40.737662 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.047417 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sfb6d_must-gather-lkqfv_44d0bd02-b076-408e-987c-1394b6fb6f0d/copy/0.log" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.048126 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.142958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output\") pod \"44d0bd02-b076-408e-987c-1394b6fb6f0d\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.143093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbwkp\" (UniqueName: \"kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp\") pod \"44d0bd02-b076-408e-987c-1394b6fb6f0d\" (UID: \"44d0bd02-b076-408e-987c-1394b6fb6f0d\") " Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.154038 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp" (OuterVolumeSpecName: "kube-api-access-nbwkp") pod "44d0bd02-b076-408e-987c-1394b6fb6f0d" (UID: "44d0bd02-b076-408e-987c-1394b6fb6f0d"). InnerVolumeSpecName "kube-api-access-nbwkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.245409 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbwkp\" (UniqueName: \"kubernetes.io/projected/44d0bd02-b076-408e-987c-1394b6fb6f0d-kube-api-access-nbwkp\") on node \"crc\" DevicePath \"\"" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.303045 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "44d0bd02-b076-408e-987c-1394b6fb6f0d" (UID: "44d0bd02-b076-408e-987c-1394b6fb6f0d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.347488 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44d0bd02-b076-408e-987c-1394b6fb6f0d-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.430227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sfb6d_must-gather-lkqfv_44d0bd02-b076-408e-987c-1394b6fb6f0d/copy/0.log" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.430725 4687 generic.go:334] "Generic (PLEG): container finished" podID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerID="5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24" exitCode=143 Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.430783 4687 scope.go:117] "RemoveContainer" containerID="5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.430804 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sfb6d/must-gather-lkqfv" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.449638 4687 scope.go:117] "RemoveContainer" containerID="8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.501246 4687 scope.go:117] "RemoveContainer" containerID="5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24" Mar 14 10:23:41 crc kubenswrapper[4687]: E0314 10:23:41.501803 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24\": container with ID starting with 5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24 not found: ID does not exist" containerID="5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.501851 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24"} err="failed to get container status \"5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24\": rpc error: code = NotFound desc = could not find container \"5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24\": container with ID starting with 5d4fece078763b11cf87b82b4faf99f8f3117245115d0c7eacba42706633be24 not found: ID does not exist" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.501883 4687 scope.go:117] "RemoveContainer" containerID="8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc" Mar 14 10:23:41 crc kubenswrapper[4687]: E0314 10:23:41.502263 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc\": container with ID starting with 
8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc not found: ID does not exist" containerID="8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.502311 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc"} err="failed to get container status \"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc\": rpc error: code = NotFound desc = could not find container \"8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc\": container with ID starting with 8ca47ed74623b6f4039e8e79a12771eaf3d85d11aab18b759a3140ac28331ffc not found: ID does not exist" Mar 14 10:23:41 crc kubenswrapper[4687]: I0314 10:23:41.748282 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" path="/var/lib/kubelet/pods/44d0bd02-b076-408e-987c-1394b6fb6f0d/volumes" Mar 14 10:23:43 crc kubenswrapper[4687]: I0314 10:23:43.737833 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:23:44 crc kubenswrapper[4687]: I0314 10:23:44.469291 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerStarted","Data":"e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2"} Mar 14 10:23:45 crc kubenswrapper[4687]: I0314 10:23:45.760999 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:23:45 crc kubenswrapper[4687]: E0314 10:23:45.761648 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:23:52 crc kubenswrapper[4687]: I0314 10:23:52.219977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:23:52 crc kubenswrapper[4687]: I0314 10:23:52.221386 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:23:52 crc kubenswrapper[4687]: I0314 10:23:52.737650 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:23:52 crc kubenswrapper[4687]: E0314 10:23:52.737900 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:23:53 crc kubenswrapper[4687]: I0314 10:23:53.552100 4687 generic.go:334] "Generic (PLEG): container finished" podID="00a62493-95c1-4765-8b9e-4188b68c587c" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" exitCode=1 Mar 14 10:23:53 crc kubenswrapper[4687]: I0314 10:23:53.552138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dcd9ff5b-bprxd" event={"ID":"00a62493-95c1-4765-8b9e-4188b68c587c","Type":"ContainerDied","Data":"e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2"} Mar 14 10:23:53 crc kubenswrapper[4687]: I0314 10:23:53.552402 4687 scope.go:117] "RemoveContainer" containerID="66be615aa9504c9c8ff696269fbd84fa43166b6a8e2930d370f841cf384fc888" Mar 14 10:23:53 crc kubenswrapper[4687]: I0314 10:23:53.553168 4687 scope.go:117] 
"RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:23:53 crc kubenswrapper[4687]: E0314 10:23:53.553373 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:23:59 crc kubenswrapper[4687]: I0314 10:23:59.736925 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:23:59 crc kubenswrapper[4687]: E0314 10:23:59.737852 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.139728 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558064-7n6bg"] Mar 14 10:24:00 crc kubenswrapper[4687]: E0314 10:24:00.140289 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140306 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4687]: E0314 10:24:00.140324 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="gather" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140346 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="gather" Mar 14 10:24:00 crc kubenswrapper[4687]: E0314 10:24:00.140357 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="copy" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140364 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="copy" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140554 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaac5bf0-d619-45b4-88c4-21c6e1e5b3ec" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140574 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="gather" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.140592 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d0bd02-b076-408e-987c-1394b6fb6f0d" containerName="copy" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.141223 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.144569 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.145143 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.150270 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.154773 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558064-7n6bg"] Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.242233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hx94\" (UniqueName: \"kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94\") pod \"auto-csr-approver-29558064-7n6bg\" (UID: \"4c5be75e-125f-41c2-9334-784c9a70e6f7\") " pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.344265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hx94\" (UniqueName: \"kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94\") pod \"auto-csr-approver-29558064-7n6bg\" (UID: \"4c5be75e-125f-41c2-9334-784c9a70e6f7\") " pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.662012 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hx94\" (UniqueName: \"kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94\") pod \"auto-csr-approver-29558064-7n6bg\" (UID: \"4c5be75e-125f-41c2-9334-784c9a70e6f7\") " 
pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:00 crc kubenswrapper[4687]: I0314 10:24:00.757517 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:01 crc kubenswrapper[4687]: I0314 10:24:01.261574 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558064-7n6bg"] Mar 14 10:24:01 crc kubenswrapper[4687]: I0314 10:24:01.656618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" event={"ID":"4c5be75e-125f-41c2-9334-784c9a70e6f7","Type":"ContainerStarted","Data":"d4d39a16b1525ff3dca1aa63c7189f87c7a53ee46222bbcc28ae0943e8956479"} Mar 14 10:24:02 crc kubenswrapper[4687]: I0314 10:24:02.219836 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:24:02 crc kubenswrapper[4687]: I0314 10:24:02.220271 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dcd9ff5b-bprxd" Mar 14 10:24:02 crc kubenswrapper[4687]: I0314 10:24:02.221164 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:24:02 crc kubenswrapper[4687]: E0314 10:24:02.221528 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:24:02 crc kubenswrapper[4687]: I0314 10:24:02.675312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" 
event={"ID":"4c5be75e-125f-41c2-9334-784c9a70e6f7","Type":"ContainerStarted","Data":"2cac57d22a65634947cce7fcf08e975e14100030aa50575f6e304e6c39ff0385"} Mar 14 10:24:02 crc kubenswrapper[4687]: I0314 10:24:02.695655 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" podStartSLOduration=1.898693272 podStartE2EDuration="2.695636183s" podCreationTimestamp="2026-03-14 10:24:00 +0000 UTC" firstStartedPulling="2026-03-14 10:24:01.286487035 +0000 UTC m=+5226.274727450" lastFinishedPulling="2026-03-14 10:24:02.083429976 +0000 UTC m=+5227.071670361" observedRunningTime="2026-03-14 10:24:02.687059832 +0000 UTC m=+5227.675300227" watchObservedRunningTime="2026-03-14 10:24:02.695636183 +0000 UTC m=+5227.683876568" Mar 14 10:24:03 crc kubenswrapper[4687]: I0314 10:24:03.717070 4687 generic.go:334] "Generic (PLEG): container finished" podID="4c5be75e-125f-41c2-9334-784c9a70e6f7" containerID="2cac57d22a65634947cce7fcf08e975e14100030aa50575f6e304e6c39ff0385" exitCode=0 Mar 14 10:24:03 crc kubenswrapper[4687]: I0314 10:24:03.718382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" event={"ID":"4c5be75e-125f-41c2-9334-784c9a70e6f7","Type":"ContainerDied","Data":"2cac57d22a65634947cce7fcf08e975e14100030aa50575f6e304e6c39ff0385"} Mar 14 10:24:04 crc kubenswrapper[4687]: I0314 10:24:04.737526 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.111392 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.146341 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hx94\" (UniqueName: \"kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94\") pod \"4c5be75e-125f-41c2-9334-784c9a70e6f7\" (UID: \"4c5be75e-125f-41c2-9334-784c9a70e6f7\") " Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.158731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94" (OuterVolumeSpecName: "kube-api-access-4hx94") pod "4c5be75e-125f-41c2-9334-784c9a70e6f7" (UID: "4c5be75e-125f-41c2-9334-784c9a70e6f7"). InnerVolumeSpecName "kube-api-access-4hx94". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.248402 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hx94\" (UniqueName: \"kubernetes.io/projected/4c5be75e-125f-41c2-9334-784c9a70e6f7-kube-api-access-4hx94\") on node \"crc\" DevicePath \"\"" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.742458 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.750772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-7n6bg" event={"ID":"4c5be75e-125f-41c2-9334-784c9a70e6f7","Type":"ContainerDied","Data":"d4d39a16b1525ff3dca1aa63c7189f87c7a53ee46222bbcc28ae0943e8956479"} Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.750941 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d39a16b1525ff3dca1aa63c7189f87c7a53ee46222bbcc28ae0943e8956479" Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.750952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerStarted","Data":"7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5"} Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.773689 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-skn62"] Mar 14 10:24:05 crc kubenswrapper[4687]: I0314 10:24:05.784962 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-skn62"] Mar 14 10:24:07 crc kubenswrapper[4687]: I0314 10:24:07.760714 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde34560-2213-45da-bebf-2ef1b97169f7" path="/var/lib/kubelet/pods/fde34560-2213-45da-bebf-2ef1b97169f7/volumes" Mar 14 10:24:10 crc kubenswrapper[4687]: I0314 10:24:10.737312 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:24:10 crc kubenswrapper[4687]: E0314 10:24:10.738282 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:24:12 crc kubenswrapper[4687]: I0314 10:24:12.127851 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:24:12 crc kubenswrapper[4687]: I0314 10:24:12.128216 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:24:13 crc kubenswrapper[4687]: I0314 10:24:13.841165 4687 generic.go:334] "Generic (PLEG): container finished" podID="a89460b9-5c8a-4000-ac6a-6202699a10d1" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" exitCode=1 Mar 14 10:24:13 crc kubenswrapper[4687]: I0314 10:24:13.841244 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f987fc4-zw2rw" event={"ID":"a89460b9-5c8a-4000-ac6a-6202699a10d1","Type":"ContainerDied","Data":"7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5"} Mar 14 10:24:13 crc kubenswrapper[4687]: I0314 10:24:13.841582 4687 scope.go:117] "RemoveContainer" containerID="80b564b433a63b273b20746ad4d23886a334faef5f5a81c3a11415f653589a23" Mar 14 10:24:13 crc kubenswrapper[4687]: I0314 10:24:13.842654 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:13 crc kubenswrapper[4687]: E0314 10:24:13.843084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:24:15 crc kubenswrapper[4687]: I0314 10:24:15.748140 4687 scope.go:117] 
"RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:24:15 crc kubenswrapper[4687]: E0314 10:24:15.748642 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:24:21 crc kubenswrapper[4687]: I0314 10:24:21.737041 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:24:21 crc kubenswrapper[4687]: E0314 10:24:21.737781 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:24:22 crc kubenswrapper[4687]: I0314 10:24:22.128538 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:24:22 crc kubenswrapper[4687]: I0314 10:24:22.129269 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:22 crc kubenswrapper[4687]: E0314 10:24:22.129532 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:24:22 crc kubenswrapper[4687]: I0314 
10:24:22.129566 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74f987fc4-zw2rw" Mar 14 10:24:22 crc kubenswrapper[4687]: I0314 10:24:22.933242 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:22 crc kubenswrapper[4687]: E0314 10:24:22.933607 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:24:26 crc kubenswrapper[4687]: I0314 10:24:26.738699 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:24:26 crc kubenswrapper[4687]: E0314 10:24:26.739571 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:24:32 crc kubenswrapper[4687]: I0314 10:24:32.213739 4687 scope.go:117] "RemoveContainer" containerID="99149720dafaa0e437d905f4b99d5d59ab8c8a449a0f19ec762adafc0308c22a" Mar 14 10:24:32 crc kubenswrapper[4687]: I0314 10:24:32.283896 4687 scope.go:117] "RemoveContainer" containerID="c2cb3d912c7760e01c8d19fac36b5c4783b22fced433445966fe829beb26f228" Mar 14 10:24:34 crc kubenswrapper[4687]: I0314 10:24:34.737156 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:24:34 crc kubenswrapper[4687]: E0314 10:24:34.738658 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:24:36 crc kubenswrapper[4687]: I0314 10:24:36.738234 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:36 crc kubenswrapper[4687]: E0314 10:24:36.739134 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:24:40 crc kubenswrapper[4687]: I0314 10:24:40.737263 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:24:40 crc kubenswrapper[4687]: E0314 10:24:40.738479 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:24:47 crc kubenswrapper[4687]: I0314 10:24:47.737206 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:47 crc kubenswrapper[4687]: E0314 10:24:47.738081 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:24:49 crc kubenswrapper[4687]: I0314 10:24:49.737607 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:24:49 crc kubenswrapper[4687]: E0314 10:24:49.738217 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:24:55 crc kubenswrapper[4687]: I0314 10:24:55.752363 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:24:55 crc kubenswrapper[4687]: E0314 10:24:55.753183 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:24:58 crc kubenswrapper[4687]: I0314 10:24:58.737970 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:24:58 crc kubenswrapper[4687]: E0314 10:24:58.739139 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" 
podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:25:03 crc kubenswrapper[4687]: I0314 10:25:03.736780 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:25:03 crc kubenswrapper[4687]: E0314 10:25:03.737590 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:25:06 crc kubenswrapper[4687]: I0314 10:25:06.737889 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:25:06 crc kubenswrapper[4687]: E0314 10:25:06.738417 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:25:13 crc kubenswrapper[4687]: I0314 10:25:13.736994 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:25:13 crc kubenswrapper[4687]: E0314 10:25:13.737576 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:25:17 crc kubenswrapper[4687]: I0314 10:25:17.736742 4687 scope.go:117] "RemoveContainer" 
containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:25:17 crc kubenswrapper[4687]: I0314 10:25:17.738529 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:25:17 crc kubenswrapper[4687]: E0314 10:25:17.738737 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:25:17 crc kubenswrapper[4687]: E0314 10:25:17.738972 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:25:28 crc kubenswrapper[4687]: I0314 10:25:28.738590 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:25:28 crc kubenswrapper[4687]: E0314 10:25:28.739663 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:25:29 crc kubenswrapper[4687]: I0314 10:25:29.738183 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:25:29 crc kubenswrapper[4687]: E0314 10:25:29.739262 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:25:32 crc kubenswrapper[4687]: I0314 10:25:32.737289 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:25:32 crc kubenswrapper[4687]: E0314 10:25:32.738055 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:25:40 crc kubenswrapper[4687]: I0314 10:25:40.737967 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:25:40 crc kubenswrapper[4687]: E0314 10:25:40.739841 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:25:43 crc kubenswrapper[4687]: I0314 10:25:43.737914 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:25:43 crc kubenswrapper[4687]: E0314 10:25:43.739635 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:25:45 crc kubenswrapper[4687]: I0314 10:25:45.749778 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:25:45 crc kubenswrapper[4687]: E0314 10:25:45.750425 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:25:51 crc kubenswrapper[4687]: I0314 10:25:51.737448 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:25:51 crc kubenswrapper[4687]: E0314 10:25:51.738107 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:25:54 crc kubenswrapper[4687]: I0314 10:25:54.738685 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:25:54 crc kubenswrapper[4687]: E0314 10:25:54.739680 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" 
podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:25:59 crc kubenswrapper[4687]: I0314 10:25:59.737211 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:25:59 crc kubenswrapper[4687]: E0314 10:25:59.737980 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.151572 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558066-9rd6t"] Mar 14 10:26:00 crc kubenswrapper[4687]: E0314 10:26:00.152262 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5be75e-125f-41c2-9334-784c9a70e6f7" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.152291 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5be75e-125f-41c2-9334-784c9a70e6f7" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.152743 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5be75e-125f-41c2-9334-784c9a70e6f7" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.153888 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.156911 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.157248 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.159388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-qmrf7" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.182028 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558066-9rd6t"] Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.266503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cwn\" (UniqueName: \"kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn\") pod \"auto-csr-approver-29558066-9rd6t\" (UID: \"fd52c34b-5bb3-4513-a475-dfcc5dfa9471\") " pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.368635 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cwn\" (UniqueName: \"kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn\") pod \"auto-csr-approver-29558066-9rd6t\" (UID: \"fd52c34b-5bb3-4513-a475-dfcc5dfa9471\") " pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.396666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cwn\" (UniqueName: \"kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn\") pod \"auto-csr-approver-29558066-9rd6t\" (UID: \"fd52c34b-5bb3-4513-a475-dfcc5dfa9471\") " 
pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.474492 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.953615 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558066-9rd6t"] Mar 14 10:26:00 crc kubenswrapper[4687]: I0314 10:26:00.989145 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" event={"ID":"fd52c34b-5bb3-4513-a475-dfcc5dfa9471","Type":"ContainerStarted","Data":"f3701bd18a1ecea5c8547c1b31ca795b77bac4665194b951bcbc2a62b8ec5fb0"} Mar 14 10:26:03 crc kubenswrapper[4687]: I0314 10:26:03.015349 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" event={"ID":"fd52c34b-5bb3-4513-a475-dfcc5dfa9471","Type":"ContainerStarted","Data":"623ea442859458226d1172185d5fcf7a61052e6442f4ac619e735f5a9a9b2562"} Mar 14 10:26:03 crc kubenswrapper[4687]: I0314 10:26:03.034896 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" podStartSLOduration=2.215272621 podStartE2EDuration="3.034873548s" podCreationTimestamp="2026-03-14 10:26:00 +0000 UTC" firstStartedPulling="2026-03-14 10:26:00.958180894 +0000 UTC m=+5345.946421269" lastFinishedPulling="2026-03-14 10:26:01.777781791 +0000 UTC m=+5346.766022196" observedRunningTime="2026-03-14 10:26:03.030280344 +0000 UTC m=+5348.018520719" watchObservedRunningTime="2026-03-14 10:26:03.034873548 +0000 UTC m=+5348.023113923" Mar 14 10:26:03 crc kubenswrapper[4687]: I0314 10:26:03.737702 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:26:03 crc kubenswrapper[4687]: E0314 10:26:03.738290 4687 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:26:04 crc kubenswrapper[4687]: I0314 10:26:04.028225 4687 generic.go:334] "Generic (PLEG): container finished" podID="fd52c34b-5bb3-4513-a475-dfcc5dfa9471" containerID="623ea442859458226d1172185d5fcf7a61052e6442f4ac619e735f5a9a9b2562" exitCode=0 Mar 14 10:26:04 crc kubenswrapper[4687]: I0314 10:26:04.028283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" event={"ID":"fd52c34b-5bb3-4513-a475-dfcc5dfa9471","Type":"ContainerDied","Data":"623ea442859458226d1172185d5fcf7a61052e6442f4ac619e735f5a9a9b2562"} Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.396149 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.495807 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8cwn\" (UniqueName: \"kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn\") pod \"fd52c34b-5bb3-4513-a475-dfcc5dfa9471\" (UID: \"fd52c34b-5bb3-4513-a475-dfcc5dfa9471\") " Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.506455 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn" (OuterVolumeSpecName: "kube-api-access-j8cwn") pod "fd52c34b-5bb3-4513-a475-dfcc5dfa9471" (UID: "fd52c34b-5bb3-4513-a475-dfcc5dfa9471"). InnerVolumeSpecName "kube-api-access-j8cwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.598783 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8cwn\" (UniqueName: \"kubernetes.io/projected/fd52c34b-5bb3-4513-a475-dfcc5dfa9471-kube-api-access-j8cwn\") on node \"crc\" DevicePath \"\"" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.896662 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:05 crc kubenswrapper[4687]: E0314 10:26:05.897645 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd52c34b-5bb3-4513-a475-dfcc5dfa9471" containerName="oc" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.897667 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd52c34b-5bb3-4513-a475-dfcc5dfa9471" containerName="oc" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.897924 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd52c34b-5bb3-4513-a475-dfcc5dfa9471" containerName="oc" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.899849 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:05 crc kubenswrapper[4687]: I0314 10:26:05.908992 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.008069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxfw\" (UniqueName: \"kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.008131 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.008173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.063735 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" event={"ID":"fd52c34b-5bb3-4513-a475-dfcc5dfa9471","Type":"ContainerDied","Data":"f3701bd18a1ecea5c8547c1b31ca795b77bac4665194b951bcbc2a62b8ec5fb0"} Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.063773 4687 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f3701bd18a1ecea5c8547c1b31ca795b77bac4665194b951bcbc2a62b8ec5fb0" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.063828 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-9rd6t" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.113578 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxfw\" (UniqueName: \"kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.113643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.113660 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.114188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.114722 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.127497 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-tt4nq"] Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.140274 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxfw\" (UniqueName: \"kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw\") pod \"certified-operators-2qd4q\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.162101 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-tt4nq"] Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.228043 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.695745 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:06 crc kubenswrapper[4687]: W0314 10:26:06.701091 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c88bb8_aeaa_4e74_b43f_0c51f8ba7ee7.slice/crio-6de4231b6666135b7094c6eefbad177c7eb1dd24e00c1ff7756c9e6517b2cd19 WatchSource:0}: Error finding container 6de4231b6666135b7094c6eefbad177c7eb1dd24e00c1ff7756c9e6517b2cd19: Status 404 returned error can't find the container with id 6de4231b6666135b7094c6eefbad177c7eb1dd24e00c1ff7756c9e6517b2cd19 Mar 14 10:26:06 crc kubenswrapper[4687]: I0314 10:26:06.736945 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:26:06 crc kubenswrapper[4687]: E0314 10:26:06.737158 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:26:07 crc kubenswrapper[4687]: I0314 10:26:07.077001 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" containerID="bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28" exitCode=0 Mar 14 10:26:07 crc kubenswrapper[4687]: I0314 10:26:07.077092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerDied","Data":"bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28"} Mar 14 10:26:07 crc kubenswrapper[4687]: 
I0314 10:26:07.077355 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerStarted","Data":"6de4231b6666135b7094c6eefbad177c7eb1dd24e00c1ff7756c9e6517b2cd19"} Mar 14 10:26:07 crc kubenswrapper[4687]: I0314 10:26:07.753628 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de177a96-56d3-4450-9e34-ab28e17b5fd1" path="/var/lib/kubelet/pods/de177a96-56d3-4450-9e34-ab28e17b5fd1/volumes" Mar 14 10:26:08 crc kubenswrapper[4687]: I0314 10:26:08.092053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerStarted","Data":"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255"} Mar 14 10:26:10 crc kubenswrapper[4687]: I0314 10:26:10.116102 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" containerID="a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255" exitCode=0 Mar 14 10:26:10 crc kubenswrapper[4687]: I0314 10:26:10.116169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerDied","Data":"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255"} Mar 14 10:26:11 crc kubenswrapper[4687]: I0314 10:26:11.130323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerStarted","Data":"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38"} Mar 14 10:26:11 crc kubenswrapper[4687]: I0314 10:26:11.158617 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2qd4q" podStartSLOduration=2.73286599 
podStartE2EDuration="6.158597004s" podCreationTimestamp="2026-03-14 10:26:05 +0000 UTC" firstStartedPulling="2026-03-14 10:26:07.079295526 +0000 UTC m=+5352.067535941" lastFinishedPulling="2026-03-14 10:26:10.50502657 +0000 UTC m=+5355.493266955" observedRunningTime="2026-03-14 10:26:11.147823838 +0000 UTC m=+5356.136064213" watchObservedRunningTime="2026-03-14 10:26:11.158597004 +0000 UTC m=+5356.146837399" Mar 14 10:26:13 crc kubenswrapper[4687]: I0314 10:26:13.737037 4687 scope.go:117] "RemoveContainer" containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:26:13 crc kubenswrapper[4687]: E0314 10:26:13.737746 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s5gw5_openshift-machine-config-operator(c28f39ed-17ae-4d24-9fa5-cea877046b6f)\"" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" podUID="c28f39ed-17ae-4d24-9fa5-cea877046b6f" Mar 14 10:26:15 crc kubenswrapper[4687]: I0314 10:26:15.745886 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:26:15 crc kubenswrapper[4687]: E0314 10:26:15.746650 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:26:16 crc kubenswrapper[4687]: I0314 10:26:16.228350 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:16 crc kubenswrapper[4687]: I0314 10:26:16.228631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:16 crc kubenswrapper[4687]: I0314 10:26:16.297164 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:17 crc kubenswrapper[4687]: I0314 10:26:17.831038 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:17 crc kubenswrapper[4687]: I0314 10:26:17.895385 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.219037 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2qd4q" podUID="d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" containerName="registry-server" containerID="cri-o://bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38" gracePeriod=2 Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.717107 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.835815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content\") pod \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.835924 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blxfw\" (UniqueName: \"kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw\") pod \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.836005 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities\") pod \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\" (UID: \"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7\") " Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.836778 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities" (OuterVolumeSpecName: "utilities") pod "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" (UID: "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.843031 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw" (OuterVolumeSpecName: "kube-api-access-blxfw") pod "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" (UID: "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7"). InnerVolumeSpecName "kube-api-access-blxfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.889013 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" (UID: "d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.939787 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.939994 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blxfw\" (UniqueName: \"kubernetes.io/projected/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-kube-api-access-blxfw\") on node \"crc\" DevicePath \"\"" Mar 14 10:26:19 crc kubenswrapper[4687]: I0314 10:26:19.940122 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.228917 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" containerID="bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38" exitCode=0 Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.228953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerDied","Data":"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38"} Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.228983 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2qd4q" event={"ID":"d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7","Type":"ContainerDied","Data":"6de4231b6666135b7094c6eefbad177c7eb1dd24e00c1ff7756c9e6517b2cd19"} Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.229000 4687 scope.go:117] "RemoveContainer" containerID="bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.229001 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qd4q" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.254050 4687 scope.go:117] "RemoveContainer" containerID="a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.282385 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.285624 4687 scope.go:117] "RemoveContainer" containerID="bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.289568 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2qd4q"] Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.346525 4687 scope.go:117] "RemoveContainer" containerID="bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38" Mar 14 10:26:20 crc kubenswrapper[4687]: E0314 10:26:20.346940 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38\": container with ID starting with bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38 not found: ID does not exist" containerID="bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 
10:26:20.346994 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38"} err="failed to get container status \"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38\": rpc error: code = NotFound desc = could not find container \"bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38\": container with ID starting with bec8ceee8175139bcb80e61a486fe6dfd52802615312ac7465b5958008249f38 not found: ID does not exist" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.347019 4687 scope.go:117] "RemoveContainer" containerID="a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255" Mar 14 10:26:20 crc kubenswrapper[4687]: E0314 10:26:20.347377 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255\": container with ID starting with a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255 not found: ID does not exist" containerID="a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.347397 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255"} err="failed to get container status \"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255\": rpc error: code = NotFound desc = could not find container \"a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255\": container with ID starting with a905aabaa8fe4827c02af48138e20093b736b09445fea4ca2315c23250a57255 not found: ID does not exist" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.347411 4687 scope.go:117] "RemoveContainer" containerID="bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28" Mar 14 10:26:20 crc 
kubenswrapper[4687]: E0314 10:26:20.347778 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28\": container with ID starting with bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28 not found: ID does not exist" containerID="bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.347830 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28"} err="failed to get container status \"bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28\": rpc error: code = NotFound desc = could not find container \"bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28\": container with ID starting with bd0941a8f864f386ef2de2e8a9d1b06efa8525697d06abaa223d561aee7bcd28 not found: ID does not exist" Mar 14 10:26:20 crc kubenswrapper[4687]: I0314 10:26:20.737300 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:26:20 crc kubenswrapper[4687]: E0314 10:26:20.737605 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:26:21 crc kubenswrapper[4687]: I0314 10:26:21.749480 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7" path="/var/lib/kubelet/pods/d5c88bb8-aeaa-4e74-b43f-0c51f8ba7ee7/volumes" Mar 14 10:26:24 crc kubenswrapper[4687]: I0314 10:26:24.737716 4687 scope.go:117] "RemoveContainer" 
containerID="302cec90cdb38227647a0d4bf825cecc72277ef677fa4af0e27bff7bff97a9ec" Mar 14 10:26:25 crc kubenswrapper[4687]: I0314 10:26:25.301962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s5gw5" event={"ID":"c28f39ed-17ae-4d24-9fa5-cea877046b6f","Type":"ContainerStarted","Data":"005d8331ac62b6ca7d1e21c9811f6c4cf9f1e0383cf2ef476830ffe3d44513f0"} Mar 14 10:26:30 crc kubenswrapper[4687]: I0314 10:26:30.737528 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:26:30 crc kubenswrapper[4687]: E0314 10:26:30.738116 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:26:31 crc kubenswrapper[4687]: I0314 10:26:31.738395 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:26:31 crc kubenswrapper[4687]: E0314 10:26:31.739291 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:26:32 crc kubenswrapper[4687]: I0314 10:26:32.396251 4687 scope.go:117] "RemoveContainer" containerID="2ab6e0f4c9fdab20b2e251117651cec7ddf2261a454f9a2cf15e1ede90d1fa9e" Mar 14 10:26:42 crc kubenswrapper[4687]: I0314 10:26:42.737225 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:26:42 crc kubenswrapper[4687]: E0314 
10:26:42.737842 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:26:44 crc kubenswrapper[4687]: I0314 10:26:44.737422 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:26:44 crc kubenswrapper[4687]: E0314 10:26:44.738054 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:26:57 crc kubenswrapper[4687]: I0314 10:26:57.737752 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:26:57 crc kubenswrapper[4687]: E0314 10:26:57.738825 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:26:59 crc kubenswrapper[4687]: I0314 10:26:59.737846 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:26:59 crc kubenswrapper[4687]: E0314 10:26:59.738658 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:27:08 crc kubenswrapper[4687]: I0314 10:27:08.737774 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:27:08 crc kubenswrapper[4687]: E0314 10:27:08.738875 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:27:12 crc kubenswrapper[4687]: I0314 10:27:12.737436 4687 scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:27:12 crc kubenswrapper[4687]: E0314 10:27:12.738529 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:27:19 crc kubenswrapper[4687]: I0314 10:27:19.739074 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:27:19 crc kubenswrapper[4687]: E0314 10:27:19.740194 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c" Mar 14 10:27:24 crc kubenswrapper[4687]: I0314 10:27:24.737269 4687 
scope.go:117] "RemoveContainer" containerID="7ce4972871413da19bacbe25b2bbb62b216da876a5fc8a8faf1742f0330992c5" Mar 14 10:27:24 crc kubenswrapper[4687]: E0314 10:27:24.738222 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-74f987fc4-zw2rw_openstack(a89460b9-5c8a-4000-ac6a-6202699a10d1)\"" pod="openstack/horizon-74f987fc4-zw2rw" podUID="a89460b9-5c8a-4000-ac6a-6202699a10d1" Mar 14 10:27:30 crc kubenswrapper[4687]: I0314 10:27:30.737774 4687 scope.go:117] "RemoveContainer" containerID="e3838ffb167d348fc14136800b6e6abd46eac30bc570002f8ffb2f5e53a056e2" Mar 14 10:27:30 crc kubenswrapper[4687]: E0314 10:27:30.739071 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7dcd9ff5b-bprxd_openstack(00a62493-95c1-4765-8b9e-4188b68c587c)\"" pod="openstack/horizon-7dcd9ff5b-bprxd" podUID="00a62493-95c1-4765-8b9e-4188b68c587c"